00:00:00.001 Started by upstream project "autotest-per-patch" build number 130843
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.055 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.056 The recommended git tool is: git
00:00:00.056 using credential 00000000-0000-0000-0000-000000000002
00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.120 Fetching changes from the remote Git repository
00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.231 Using shallow fetch with depth 1
00:00:00.231 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.231 > git --version # timeout=10
00:00:00.337 > git --version # 'git version 2.39.2'
00:00:00.337 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.409 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.409 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.425 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.439 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.453 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD)
00:00:07.453 > git config core.sparsecheckout # timeout=10
00:00:07.467 > git read-tree -mu HEAD # timeout=10
00:00:07.486 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5
00:00:07.508 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions"
00:00:07.508 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10
00:00:07.606 [Pipeline] Start of Pipeline
00:00:07.619 [Pipeline] library
00:00:07.620 Loading library shm_lib@master
00:00:07.620 Library shm_lib@master is cached. Copying from home.
00:00:07.637 [Pipeline] node
00:00:07.659 Running on GP19 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.661 [Pipeline] {
00:00:07.671 [Pipeline] catchError
00:00:07.672 [Pipeline] {
00:00:07.682 [Pipeline] wrap
00:00:07.692 [Pipeline] {
00:00:07.699 [Pipeline] stage
00:00:07.700 [Pipeline] { (Prologue)
00:00:07.965 [Pipeline] sh
00:00:08.919 + logger -p user.info -t JENKINS-CI
00:00:08.949 [Pipeline] echo
00:00:08.950 Node: GP19
00:00:08.957 [Pipeline] sh
00:00:09.310 [Pipeline] setCustomBuildProperty
00:00:09.324 [Pipeline] echo
00:00:09.325 Cleanup processes
00:00:09.331 [Pipeline] sh
00:00:09.623 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.623 25710 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.637 [Pipeline] sh
00:00:09.956 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.956 ++ grep -v 'sudo pgrep'
00:00:09.956 ++ awk '{print $1}'
00:00:09.956 + sudo kill -9
00:00:09.956 + true
00:00:09.975 [Pipeline] cleanWs
00:00:09.987 [WS-CLEANUP] Deleting project workspace...
00:00:09.987 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.002 [WS-CLEANUP] done
00:00:10.007 [Pipeline] setCustomBuildProperty
00:00:10.029 [Pipeline] sh
00:00:10.326 + sudo git config --global --replace-all safe.directory '*'
00:00:10.419 [Pipeline] httpRequest
00:00:12.381 [Pipeline] echo
00:00:12.383 Sorcerer 10.211.164.101 is alive
00:00:12.393 [Pipeline] retry
00:00:12.395 [Pipeline] {
00:00:12.410 [Pipeline] httpRequest
00:00:12.415 HttpMethod: GET
00:00:12.416 URL: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:12.416 Sending request to url: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:12.439 Response Code: HTTP/1.1 200 OK
00:00:12.440 Success: Status code 200 is in the accepted range: 200,404
00:00:12.440 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:34.988 [Pipeline] }
00:00:35.008 [Pipeline] // retry
00:00:35.015 [Pipeline] sh
00:00:35.309 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:35.326 [Pipeline] httpRequest
00:00:36.299 [Pipeline] echo
00:00:36.301 Sorcerer 10.211.164.101 is alive
00:00:36.310 [Pipeline] retry
00:00:36.312 [Pipeline] {
00:00:36.327 [Pipeline] httpRequest
00:00:36.332 HttpMethod: GET
00:00:36.333 URL: http://10.211.164.101/packages/spdk_3365e53066423b27a1a5d215d3cd8050fcb4cc12.tar.gz
00:00:36.334 Sending request to url: http://10.211.164.101/packages/spdk_3365e53066423b27a1a5d215d3cd8050fcb4cc12.tar.gz
00:00:36.342 Response Code: HTTP/1.1 200 OK
00:00:36.342 Success: Status code 200 is in the accepted range: 200,404
00:00:36.342 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3365e53066423b27a1a5d215d3cd8050fcb4cc12.tar.gz
00:03:22.966 [Pipeline] }
00:03:22.985 [Pipeline] // retry
00:03:22.992 [Pipeline] sh
00:03:23.286 + tar --no-same-owner -xf spdk_3365e53066423b27a1a5d215d3cd8050fcb4cc12.tar.gz
00:03:25.857 [Pipeline] sh
00:03:26.155 + git -C spdk log --oneline -n5
00:03:26.155 3365e5306 scripts/pkgdep: Drop support for downloading shfmt binaries
00:03:26.155 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:03:26.155 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut
00:03:26.155 82c46626a lib/event: implement scheduler trace events
00:03:26.155 fa6aec495 lib/thread: register thread owner type for scheduler trace events
00:03:26.169 [Pipeline] }
00:03:26.186 [Pipeline] // stage
00:03:26.196 [Pipeline] stage
00:03:26.198 [Pipeline] { (Prepare)
00:03:26.215 [Pipeline] writeFile
00:03:26.232 [Pipeline] sh
00:03:26.523 + logger -p user.info -t JENKINS-CI
00:03:26.538 [Pipeline] sh
00:03:26.828 + logger -p user.info -t JENKINS-CI
00:03:26.843 [Pipeline] sh
00:03:27.133 + cat autorun-spdk.conf
00:03:27.133 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:27.133 SPDK_TEST_NVMF=1
00:03:27.133 SPDK_TEST_NVME_CLI=1
00:03:27.133 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:27.133 SPDK_TEST_NVMF_NICS=e810
00:03:27.133 SPDK_TEST_VFIOUSER=1
00:03:27.133 SPDK_RUN_UBSAN=1
00:03:27.133 NET_TYPE=phy
00:03:27.143 RUN_NIGHTLY=0
00:03:27.147 [Pipeline] readFile
00:03:27.201 [Pipeline] withEnv
00:03:27.203 [Pipeline] {
00:03:27.216 [Pipeline] sh
00:03:27.510 + set -ex
00:03:27.510 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:27.510 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:27.510 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:27.510 ++ SPDK_TEST_NVMF=1
00:03:27.510 ++ SPDK_TEST_NVME_CLI=1
00:03:27.510 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:27.510 ++ SPDK_TEST_NVMF_NICS=e810
00:03:27.510 ++ SPDK_TEST_VFIOUSER=1
00:03:27.510 ++ SPDK_RUN_UBSAN=1
00:03:27.510 ++ NET_TYPE=phy
00:03:27.510 ++ RUN_NIGHTLY=0
00:03:27.510 + case $SPDK_TEST_NVMF_NICS in
00:03:27.510 + DRIVERS=ice
00:03:27.510 + [[ tcp == \r\d\m\a ]]
00:03:27.510 + [[ -n ice ]]
00:03:27.510 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:27.510 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:27.510 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:27.510 rmmod: ERROR: Module i40iw is not currently loaded
00:03:27.510 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:27.510 + true
00:03:27.510 + for D in $DRIVERS
00:03:27.510 + sudo modprobe ice
00:03:27.510 + exit 0
00:03:27.521 [Pipeline] }
00:03:27.538 [Pipeline] // withEnv
00:03:27.544 [Pipeline] }
00:03:27.558 [Pipeline] // stage
00:03:27.568 [Pipeline] catchError
00:03:27.570 [Pipeline] {
00:03:27.585 [Pipeline] timeout
00:03:27.585 Timeout set to expire in 1 hr 0 min
00:03:27.587 [Pipeline] {
00:03:27.603 [Pipeline] stage
00:03:27.606 [Pipeline] { (Tests)
00:03:27.620 [Pipeline] sh
00:03:27.912 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:27.912 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:27.912 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:27.912 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:27.912 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:27.912 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:27.912 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:27.912 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:27.912 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:27.912 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:27.912 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:27.912 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:27.912 + source /etc/os-release
00:03:27.912 ++ NAME='Fedora Linux'
00:03:27.912 ++ VERSION='39 (Cloud Edition)'
00:03:27.912 ++ ID=fedora
00:03:27.912 ++ VERSION_ID=39
00:03:27.912 ++ VERSION_CODENAME=
00:03:27.912 ++ PLATFORM_ID=platform:f39
00:03:27.912 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:27.912 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:27.912 ++ LOGO=fedora-logo-icon
00:03:27.912 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:27.912 ++ HOME_URL=https://fedoraproject.org/
00:03:27.912 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:27.912 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:27.912 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:27.912 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:27.912 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:27.912 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:27.912 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:27.912 ++ SUPPORT_END=2024-11-12
00:03:27.912 ++ VARIANT='Cloud Edition'
00:03:27.912 ++ VARIANT_ID=cloud
00:03:27.912 + uname -a
00:03:27.912 Linux spdk-gp-19 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:03:27.912 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:28.856 Hugepages
00:03:28.856 node hugesize free / total
00:03:28.856 node0 1048576kB 0 / 0
00:03:28.856 node0 2048kB 0 / 0
00:03:28.856 node1 1048576kB 0 / 0
00:03:28.856 node1 2048kB 0 / 0
00:03:28.856
00:03:28.856 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:28.856 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:28.856 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:28.856 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:28.856 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:28.856 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:28.856 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:28.856 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:28.856 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:28.856 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:28.856 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:28.856 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:28.856 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:28.856 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:28.856 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:28.856 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:28.856 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:28.856 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:28.856 + rm -f /tmp/spdk-ld-path
00:03:28.856 + source autorun-spdk.conf
00:03:28.856 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:28.856 ++ SPDK_TEST_NVMF=1
00:03:28.856 ++ SPDK_TEST_NVME_CLI=1
00:03:28.856 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:28.856 ++ SPDK_TEST_NVMF_NICS=e810
00:03:28.856 ++ SPDK_TEST_VFIOUSER=1
00:03:28.856 ++ SPDK_RUN_UBSAN=1
00:03:28.856 ++ NET_TYPE=phy
00:03:28.856 ++ RUN_NIGHTLY=0
00:03:28.856 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:28.856 + [[ -n '' ]]
00:03:28.856 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:28.856 + for M in /var/spdk/build-*-manifest.txt
00:03:28.856 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:28.856 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:29.116 + for M in /var/spdk/build-*-manifest.txt
00:03:29.116 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:29.116 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:29.116 + for M in /var/spdk/build-*-manifest.txt
00:03:29.116 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:29.116 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:29.116 ++ uname
00:03:29.116 + [[ Linux == \L\i\n\u\x ]]
00:03:29.116 + sudo dmesg -T
00:03:29.116 + sudo dmesg --clear
00:03:29.116 + dmesg_pid=27609
00:03:29.116 + [[ Fedora Linux == FreeBSD ]]
00:03:29.116 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:29.116 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:29.116 + sudo dmesg -Tw
00:03:29.116 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:29.116 + [[ -x /usr/src/fio-static/fio ]]
00:03:29.116 + export FIO_BIN=/usr/src/fio-static/fio
00:03:29.116 + FIO_BIN=/usr/src/fio-static/fio
00:03:29.116 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:29.116 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:29.116 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:29.116 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:29.116 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:29.116 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:29.116 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:29.116 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:29.116 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:29.116 Test configuration:
00:03:29.116 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:29.116 SPDK_TEST_NVMF=1
00:03:29.116 SPDK_TEST_NVME_CLI=1
00:03:29.116 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:29.116 SPDK_TEST_NVMF_NICS=e810
00:03:29.116 SPDK_TEST_VFIOUSER=1
00:03:29.116 SPDK_RUN_UBSAN=1
00:03:29.116 NET_TYPE=phy
00:03:29.116 RUN_NIGHTLY=0
09:24:17 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:03:29.116 09:24:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
09:24:17 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:29.116 09:24:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:29.116 09:24:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:29.116 09:24:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:29.116 09:24:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:29.116 09:24:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:29.116 09:24:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:29.116 09:24:17 -- paths/export.sh@5 -- $ export PATH
00:03:29.116 09:24:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:29.116 09:24:17 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:29.116 09:24:17 -- common/autobuild_common.sh@486 -- $ date +%s
00:03:29.116 09:24:17 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728285857.XXXXXX
00:03:29.116 09:24:18 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728285857.hzlrWX
00:03:29.116 09:24:18 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:03:29.116 09:24:18 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:03:29.116 09:24:18 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:29.116 09:24:18 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:29.116 09:24:18 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:29.116 09:24:18 -- common/autobuild_common.sh@502 -- $ get_config_params
00:03:29.117 09:24:18 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:03:29.117 09:24:18 -- common/autotest_common.sh@10 -- $ set +x
00:03:29.117 09:24:18 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:29.117 09:24:18 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:03:29.117 09:24:18 -- pm/common@17 -- $ local monitor
00:03:29.117 09:24:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:29.117 09:24:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:29.117 09:24:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:29.117 09:24:18 -- pm/common@21 -- $ date +%s
00:03:29.117 09:24:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:29.117 09:24:18 -- pm/common@21 -- $ date +%s
00:03:29.117 09:24:18 -- pm/common@25 -- $ sleep 1
00:03:29.117 09:24:18 -- pm/common@21 -- $ date +%s
00:03:29.117 09:24:18 -- pm/common@21 -- $ date +%s
00:03:29.117 09:24:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285858
00:03:29.117 09:24:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285858
00:03:29.117 09:24:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285858
00:03:29.117 09:24:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285858
00:03:29.117 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285858_collect-cpu-load.pm.log
00:03:29.117 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285858_collect-vmstat.pm.log
00:03:29.117 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285858_collect-cpu-temp.pm.log
00:03:29.117 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285858_collect-bmc-pm.bmc.pm.log
00:03:30.061 09:24:19 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:03:30.061 09:24:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:30.061 09:24:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:30.061 09:24:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:30.061 09:24:19 -- spdk/autobuild.sh@16 -- $ date -u
00:03:30.061 Mon Oct 7 07:24:19 AM UTC 2024
00:03:30.061 09:24:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:30.061 v25.01-pre-36-g3365e5306
00:03:30.061 09:24:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:30.061 09:24:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:30.061 09:24:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:30.061 09:24:19 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:30.061 09:24:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:30.061 09:24:19 -- common/autotest_common.sh@10 -- $ set +x
00:03:30.321 ************************************
00:03:30.321 START TEST ubsan
00:03:30.321 ************************************
00:03:30.321 09:24:19 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:03:30.321 using ubsan
00:03:30.321
00:03:30.321 real 0m0.000s
00:03:30.321 user 0m0.000s
00:03:30.321 sys 0m0.000s
00:03:30.321 09:24:19 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:30.321 09:24:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:30.321 ************************************
00:03:30.321 END TEST ubsan
00:03:30.321 ************************************
00:03:30.321 09:24:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:30.321 09:24:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:30.321 09:24:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:30.321 09:24:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:30.321 09:24:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:30.321 09:24:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:30.321 09:24:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:30.321 09:24:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:30.321 09:24:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:30.895 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:30.895 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:31.836 Using 'verbs' RDMA provider
00:03:45.014 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:55.005 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:55.264 Creating mk/config.mk...done.
00:03:55.264 Creating mk/cc.flags.mk...done.
00:03:55.264 Type 'make' to build.
00:03:55.264 09:24:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:03:55.264 09:24:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:55.264 09:24:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:55.264 09:24:44 -- common/autotest_common.sh@10 -- $ set +x
00:03:55.264 ************************************
00:03:55.264 START TEST make
00:03:55.264 ************************************
00:03:55.264 09:24:44 make -- common/autotest_common.sh@1125 -- $ make -j48
00:03:55.527 make[1]: Nothing to be done for 'all'.
00:03:58.095 The Meson build system
00:03:58.095 Version: 1.5.0
00:03:58.095 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:58.095 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:58.095 Build type: native build
00:03:58.095 Project name: libvfio-user
00:03:58.095 Project version: 0.0.1
00:03:58.095 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:58.095 C linker for the host machine: cc ld.bfd 2.40-14
00:03:58.095 Host machine cpu family: x86_64
00:03:58.095 Host machine cpu: x86_64
00:03:58.095 Run-time dependency threads found: YES
00:03:58.095 Library dl found: YES
00:03:58.095 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:58.095 Run-time dependency json-c found: YES 0.17
00:03:58.095 Run-time dependency cmocka found: YES 1.1.7
00:03:58.095 Program pytest-3 found: NO
00:03:58.095 Program flake8 found: NO
00:03:58.095 Program misspell-fixer found: NO
00:03:58.095 Program restructuredtext-lint found: NO
00:03:58.095 Program valgrind found: YES (/usr/bin/valgrind)
00:03:58.095 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:58.095 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:58.095 Compiler for C supports arguments -Wwrite-strings: YES
00:03:58.095 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:58.095 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:58.095 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:58.095 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:58.095 Build targets in project: 8
00:03:58.095 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:58.095 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:58.095
00:03:58.095 libvfio-user 0.0.1
00:03:58.095
00:03:58.095 User defined options
00:03:58.095 buildtype : debug
00:03:58.095 default_library: shared
00:03:58.095 libdir : /usr/local/lib
00:03:58.095
00:03:58.095 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:59.051 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:59.051 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:59.051 [2/37] Compiling C object samples/null.p/null.c.o
00:03:59.051 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:59.051 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:59.051 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:59.051 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:59.051 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:59.051 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:59.051 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:59.051 [10/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:59.051 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:59.051 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:59.051 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:59.051 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:59.051 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:59.051 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:59.051 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:59.051 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:59.051 [19/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:59.051 [20/37] Compiling C object samples/server.p/server.c.o
00:03:59.051 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:59.051 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:59.317 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:59.317 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:59.317 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:59.317 [26/37] Compiling C object samples/client.p/client.c.o
00:03:59.317 [27/37] Linking target samples/client
00:03:59.317 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:59.317 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:59.317 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:59.317 [31/37] Linking target test/unit_tests
00:03:59.581 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:59.840 [33/37] Linking target samples/null
00:03:59.840 [34/37] Linking target samples/server
00:03:59.840 [35/37] Linking target samples/shadow_ioeventfd_server
00:03:59.840 [36/37] Linking target samples/lspci
00:03:59.840 [37/37] Linking target samples/gpio-pci-idio-16
00:03:59.840 INFO: autodetecting backend as ninja
00:03:59.840 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:59.840 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:00.406 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:00.406 ninja: no work to do.
00:04:04.651 The Meson build system
00:04:04.651 Version: 1.5.0
00:04:04.651 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:04.651 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:04.651 Build type: native build
00:04:04.651 Program cat found: YES (/usr/bin/cat)
00:04:04.651 Project name: DPDK
00:04:04.651 Project version: 24.03.0
00:04:04.651 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:04.651 C linker for the host machine: cc ld.bfd 2.40-14
00:04:04.651 Host machine cpu family: x86_64
00:04:04.651 Host machine cpu: x86_64
00:04:04.651 Message: ## Building in Developer Mode ##
00:04:04.651 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:04.651 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:04.651 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:04.651 Program python3 found: YES (/usr/bin/python3)
00:04:04.651 Program cat found: YES (/usr/bin/cat)
00:04:04.651 Compiler for C supports arguments -march=native: YES
00:04:04.651 Checking for size of "void *" : 8
00:04:04.651 Checking for size of "void *" : 8 (cached)
00:04:04.652 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:04.652 Library m found: YES
00:04:04.652 Library numa found: YES
00:04:04.652 Has header "numaif.h" : YES
00:04:04.652 Library fdt found: NO
00:04:04.652 Library execinfo found: NO
00:04:04.652 Has header "execinfo.h" : YES
00:04:04.652 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:04.652 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:04.652 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:04.652 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:04.652 Run-time dependency openssl found: YES 3.1.1
00:04:04.652 Run-time dependency libpcap found: YES 1.10.4
00:04:04.652 Has header "pcap.h" with dependency libpcap: YES
00:04:04.652 Compiler for C supports arguments -Wcast-qual: YES
00:04:04.652 Compiler for C supports arguments -Wdeprecated: YES
00:04:04.652 Compiler for C supports arguments -Wformat: YES
00:04:04.652 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:04.652 Compiler for C supports arguments -Wformat-security: NO
00:04:04.652 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:04.652 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:04.652 Compiler for C supports arguments -Wnested-externs: YES
00:04:04.652 Compiler for C supports arguments -Wold-style-definition: YES
00:04:04.652 Compiler for C supports arguments -Wpointer-arith: YES
00:04:04.652 Compiler for C supports arguments -Wsign-compare: YES
00:04:04.652 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:04.652 Compiler for C supports arguments -Wundef: YES
00:04:04.652 Compiler for C supports arguments -Wwrite-strings: YES
00:04:04.652 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:04.652 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:04.652 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:04.652 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:04.652 Program objdump found: YES (/usr/bin/objdump)
00:04:04.652 Compiler for C supports arguments -mavx512f: YES
00:04:04.652 Checking if "AVX512 checking" compiles: YES
00:04:04.652 Fetching value of define "__SSE4_2__" : 1
00:04:04.652 Fetching value of define "__AES__" : 1
00:04:04.652 Fetching value of define "__AVX__" : 1
00:04:04.652 Fetching value of define "__AVX2__" : (undefined)
00:04:04.652 Fetching value of define "__AVX512BW__" : (undefined)
00:04:04.652 Fetching value of define "__AVX512CD__" : (undefined)
00:04:04.652 Fetching value of define "__AVX512DQ__" : (undefined)
00:04:04.652 Fetching value of define "__AVX512F__" : (undefined)
00:04:04.652 Fetching value of define "__AVX512VL__" : (undefined)
00:04:04.652 Fetching value of define "__PCLMUL__" : 1
00:04:04.652 Fetching value of define "__RDRND__" : 1
00:04:04.652 Fetching value of define "__RDSEED__" : (undefined)
00:04:04.652 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:04.652 Fetching value of define "__znver1__" : (undefined)
00:04:04.652 Fetching value of define "__znver2__" : (undefined)
00:04:04.652 Fetching value of define "__znver3__" : (undefined)
00:04:04.652 Fetching value of define "__znver4__" : (undefined)
00:04:04.652 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:04.652 Message: lib/log: Defining dependency "log"
00:04:04.652 Message: lib/kvargs: Defining dependency "kvargs"
00:04:04.652 Message: lib/telemetry: Defining dependency "telemetry"
00:04:04.652 Checking for function "getentropy" : NO
00:04:04.652 Message: lib/eal: Defining dependency "eal"
00:04:04.652 Message: lib/ring: Defining dependency "ring"
00:04:04.652 Message: lib/rcu: Defining dependency "rcu"
00:04:04.652 Message: lib/mempool: Defining dependency "mempool"
00:04:04.652 Message: lib/mbuf: Defining dependency "mbuf"
00:04:04.652 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:04.652 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:04.652 Compiler for C supports arguments -mpclmul: YES
00:04:04.652 Compiler for C supports arguments -maes: YES
00:04:04.652 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:04.652 Compiler for C supports arguments -mavx512bw: YES
00:04:04.652 Compiler for C supports arguments -mavx512dq: YES
00:04:04.652 Compiler for C supports arguments -mavx512vl: YES
00:04:04.652 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:04.652 Compiler for C supports arguments -mavx2: YES
00:04:04.652 Compiler for C supports arguments -mavx: YES
00:04:04.652 Message: lib/net: Defining dependency "net"
00:04:04.652 Message: lib/meter: Defining dependency "meter"
00:04:04.652 Message: lib/ethdev: Defining dependency "ethdev"
00:04:04.652 Message: lib/pci: Defining dependency "pci"
00:04:04.652 Message: lib/cmdline: Defining dependency "cmdline"
00:04:04.652 Message: lib/hash: Defining dependency "hash"
00:04:04.652 Message: lib/timer: Defining dependency "timer"
00:04:04.652 Message: lib/compressdev: Defining dependency "compressdev"
00:04:04.652 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:04.652 Message: lib/dmadev: Defining dependency "dmadev"
00:04:04.652 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:04.652 Message: lib/power: Defining dependency "power"
00:04:04.652 Message: lib/reorder: Defining dependency "reorder"
00:04:04.652 Message: lib/security: Defining dependency "security"
00:04:04.652 Has header "linux/userfaultfd.h" : YES
00:04:04.652 Has header "linux/vduse.h" : YES
00:04:04.652 Message: lib/vhost: Defining dependency "vhost"
00:04:04.652 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:04.652 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:04.652 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:04.652 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:04.652 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:04.652 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:04.652 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:04.652 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:04.652 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:04.652 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:04.652 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:04.652 Configuring doxy-api-html.conf using configuration
00:04:04.652 Configuring doxy-api-man.conf using configuration
00:04:04.652 Program mandb found: YES (/usr/bin/mandb)
00:04:04.652 Program sphinx-build found: NO
00:04:04.652 Configuring rte_build_config.h using configuration
00:04:04.652 Message:
00:04:04.652 =================
00:04:04.652 Applications Enabled
00:04:04.652 =================
00:04:04.652
00:04:04.652 apps:
00:04:04.652
00:04:04.652
00:04:04.652 Message:
00:04:04.652 =================
00:04:04.652 Libraries Enabled
00:04:04.652 =================
00:04:04.652
00:04:04.653 libs:
00:04:04.653 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:04.653 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:04.653 cryptodev, dmadev, power, reorder, security, vhost,
00:04:04.653
00:04:04.653 Message:
00:04:04.653 ===============
00:04:04.653 Drivers Enabled
00:04:04.653 ===============
00:04:04.653
00:04:04.653 common:
00:04:04.653
00:04:04.653 bus:
00:04:04.653 pci, vdev,
00:04:04.653 mempool:
00:04:04.653 ring,
00:04:04.653 dma:
00:04:04.653
00:04:04.653 net:
00:04:04.653
00:04:04.653 crypto:
00:04:04.653
00:04:04.653 compress:
00:04:04.653
00:04:04.653 vdpa:
00:04:04.653
00:04:04.653
00:04:04.653 Message:
00:04:04.653 =================
00:04:04.653 Content Skipped
00:04:04.653 =================
00:04:04.653
00:04:04.653 apps:
00:04:04.653 dumpcap: explicitly disabled via build config
00:04:04.653 graph: explicitly disabled via build config
00:04:04.653 pdump: explicitly disabled via build config
00:04:04.653 proc-info: explicitly disabled via build config
00:04:04.653 test-acl: explicitly disabled via build config
00:04:04.653 test-bbdev: explicitly disabled via build config
00:04:04.653 test-cmdline: explicitly disabled via build config
00:04:04.653 test-compress-perf: explicitly disabled via build config
00:04:04.653 test-crypto-perf: explicitly disabled via build config
00:04:04.653 test-dma-perf: explicitly disabled via build config
00:04:04.653 test-eventdev: explicitly disabled via build config
00:04:04.653 test-fib: explicitly disabled via build config
00:04:04.653 test-flow-perf: explicitly disabled via build config
00:04:04.653 test-gpudev: explicitly disabled via build config
00:04:04.653 test-mldev: explicitly disabled via build config
00:04:04.653 test-pipeline: explicitly disabled via build config
00:04:04.653 test-pmd: explicitly disabled via build config
00:04:04.653 test-regex: explicitly disabled via build config
00:04:04.653 test-sad: explicitly disabled via build config
00:04:04.653 test-security-perf: explicitly disabled via build config
00:04:04.653
00:04:04.653 libs:
00:04:04.653 argparse: explicitly disabled via build config
00:04:04.653 metrics: explicitly disabled via build config
00:04:04.653 acl: explicitly disabled via build config
00:04:04.653 bbdev: explicitly disabled via build config
00:04:04.653 bitratestats: explicitly disabled via build config
00:04:04.653 bpf: explicitly disabled via build config
00:04:04.653 cfgfile: explicitly disabled via build config
00:04:04.653 distributor: explicitly disabled via build config
00:04:04.653 efd: explicitly disabled via build config
00:04:04.653 eventdev: explicitly disabled via build config
00:04:04.653 dispatcher: explicitly disabled via build config
00:04:04.653 gpudev: explicitly disabled via build config
00:04:04.653 gro: explicitly disabled via build config
00:04:04.653 gso: explicitly disabled via build config
00:04:04.653 ip_frag: explicitly disabled via build config
00:04:04.653 jobstats: explicitly disabled via build config
00:04:04.653 latencystats: explicitly disabled via build config
00:04:04.653 lpm: explicitly disabled via build config
00:04:04.653 member: explicitly disabled via build config
00:04:04.653 pcapng: explicitly disabled via build config
00:04:04.653 rawdev: explicitly disabled via build config
00:04:04.653 regexdev: explicitly disabled via build config
00:04:04.653 mldev: explicitly disabled via build config
00:04:04.653 rib: explicitly disabled via build config
00:04:04.653 sched: explicitly disabled via build config
00:04:04.653 stack: explicitly disabled via build config
00:04:04.653 ipsec: explicitly disabled via build config
00:04:04.653 pdcp: explicitly disabled via build config
00:04:04.653 fib: explicitly disabled via build config
00:04:04.653 port: explicitly disabled via build config
00:04:04.653 pdump: explicitly disabled via build config
00:04:04.653 table: explicitly disabled via build config
00:04:04.653 pipeline: explicitly disabled via build config
00:04:04.653 graph: explicitly disabled via build config
00:04:04.653 node: explicitly disabled via build config
00:04:04.653
00:04:04.653 drivers:
00:04:04.653 common/cpt: not in enabled drivers build config
00:04:04.653 common/dpaax: not in enabled drivers build config
00:04:04.653 common/iavf: not in enabled drivers build config
00:04:04.653 common/idpf: not in enabled drivers build config
00:04:04.653 common/ionic: not in enabled drivers build config
00:04:04.653 common/mvep: not in enabled drivers build config
00:04:04.653 common/octeontx: not in enabled drivers build config
00:04:04.653 bus/auxiliary: not in enabled drivers build config
00:04:04.653 bus/cdx: not in enabled drivers build config
00:04:04.653 bus/dpaa: not in enabled drivers build config
00:04:04.653 bus/fslmc: not in enabled drivers build config
00:04:04.653 bus/ifpga: not in enabled drivers build config
00:04:04.653 bus/platform: not in enabled drivers build config
00:04:04.653 bus/uacce: not in enabled drivers build config
00:04:04.653 bus/vmbus: not in enabled drivers build config
00:04:04.653 common/cnxk: not in enabled drivers build config
00:04:04.653 common/mlx5: not in enabled drivers build config
00:04:04.653 common/nfp: not in enabled drivers build config
00:04:04.653 common/nitrox: not in enabled drivers build config
00:04:04.653 common/qat: not in enabled drivers build config
00:04:04.653 common/sfc_efx: not in enabled drivers build config
00:04:04.653 mempool/bucket: not in enabled drivers build config
00:04:04.653 mempool/cnxk: not in enabled drivers build config
00:04:04.653 mempool/dpaa: not in enabled drivers build config
00:04:04.653 mempool/dpaa2: not in enabled drivers build config
00:04:04.653 mempool/octeontx: not in enabled drivers build config
00:04:04.653 mempool/stack: not in enabled drivers build config
00:04:04.653 dma/cnxk: not in enabled drivers build config
00:04:04.653 dma/dpaa: not in enabled drivers build config
00:04:04.653 dma/dpaa2: not in enabled drivers build config
00:04:04.653 dma/hisilicon: not in enabled drivers build config
00:04:04.653 dma/idxd: not in enabled drivers build config
00:04:04.653 dma/ioat: not in enabled drivers build config
00:04:04.653 dma/skeleton: not in enabled drivers build config
00:04:04.653 net/af_packet: not in enabled drivers build config
00:04:04.653 net/af_xdp: not in enabled drivers build config
00:04:04.653 net/ark: not in enabled drivers build config
00:04:04.653 net/atlantic: not in enabled drivers build config
00:04:04.653 net/avp: not in enabled drivers build config
00:04:04.654 net/axgbe: not in enabled drivers build config
00:04:04.654 net/bnx2x: not in enabled drivers build config
00:04:04.654 net/bnxt: not in enabled drivers build config
00:04:04.654 net/bonding: not in enabled drivers build config
00:04:04.654 net/cnxk: not in enabled drivers build config
00:04:04.654 net/cpfl: not in enabled drivers build config
00:04:04.654 net/cxgbe: not in enabled drivers build config
00:04:04.654 net/dpaa: not in enabled drivers build config
00:04:04.654 net/dpaa2: not in enabled drivers build config
00:04:04.654 net/e1000: not in enabled drivers build config
00:04:04.654 net/ena: not in enabled drivers build config
00:04:04.654 net/enetc: not in enabled drivers build config
00:04:04.654 net/enetfec: not in enabled drivers build config
00:04:04.654 net/enic: not in enabled drivers build config
00:04:04.654 net/failsafe: not in enabled drivers build config
00:04:04.654 net/fm10k: not in enabled drivers build config
00:04:04.654 net/gve: not in enabled drivers build config
00:04:04.654 net/hinic: not in enabled drivers build config
00:04:04.654 net/hns3: not in enabled drivers build config
00:04:04.654 net/i40e: not in enabled drivers build config
00:04:04.654 net/iavf: not in enabled drivers build config
00:04:04.654 net/ice: not in enabled drivers build config
00:04:04.654 net/idpf: not in enabled drivers build config
00:04:04.654 net/igc: not in enabled drivers build config
00:04:04.654 net/ionic: not in enabled drivers build config
00:04:04.654 net/ipn3ke: not in enabled drivers build config
00:04:04.654 net/ixgbe: not in enabled drivers build config
00:04:04.654 net/mana: not in enabled drivers build config
00:04:04.654 net/memif: not in enabled drivers build config
00:04:04.654 net/mlx4: not in enabled drivers build config
00:04:04.654 net/mlx5: not in enabled drivers build config
00:04:04.654 net/mvneta: not in enabled drivers build config
00:04:04.654 net/mvpp2: not in enabled drivers build config
00:04:04.654 net/netvsc: not in enabled drivers build config
00:04:04.654 net/nfb: not in enabled drivers build config
00:04:04.654 net/nfp: not in enabled drivers build config
00:04:04.654 net/ngbe: not in enabled drivers build config
00:04:04.654 net/null: not in enabled drivers build config
00:04:04.654 net/octeontx: not in enabled drivers build config
00:04:04.654 net/octeon_ep: not in enabled drivers build config
00:04:04.654 net/pcap: not in enabled drivers build config
00:04:04.654 net/pfe: not in enabled drivers build config
00:04:04.654 net/qede: not in enabled drivers build config
00:04:04.654 net/ring: not in enabled drivers build config
00:04:04.654 net/sfc: not in enabled drivers build config
00:04:04.654 net/softnic: not in enabled drivers build config
00:04:04.654 net/tap: not in enabled drivers build config
00:04:04.654 net/thunderx: not in enabled drivers build config
00:04:04.654 net/txgbe: not in enabled drivers build config
00:04:04.654 net/vdev_netvsc: not in enabled drivers build config
00:04:04.654 net/vhost: not in enabled drivers build config
00:04:04.654 net/virtio: not in enabled drivers build config
00:04:04.654 net/vmxnet3: not in enabled drivers build config
00:04:04.654 raw/*: missing internal dependency, "rawdev"
00:04:04.654 crypto/armv8: not in enabled drivers build config
00:04:04.654 crypto/bcmfs: not in enabled drivers build config
00:04:04.654 crypto/caam_jr: not in enabled drivers build config
00:04:04.654 crypto/ccp: not in enabled drivers build config
00:04:04.654 crypto/cnxk: not in enabled drivers build config
00:04:04.654 crypto/dpaa_sec: not in enabled drivers build config
00:04:04.654 crypto/dpaa2_sec: not in enabled drivers build config
00:04:04.654 crypto/ipsec_mb: not in enabled drivers build config
00:04:04.654 crypto/mlx5: not in enabled drivers build config
00:04:04.654 crypto/mvsam: not in enabled drivers build config
00:04:04.654 crypto/nitrox: not in enabled drivers build config
00:04:04.654 crypto/null: not in enabled drivers build config
00:04:04.654 crypto/octeontx: not in enabled drivers build config
00:04:04.654 crypto/openssl: not in enabled drivers build config
00:04:04.654 crypto/scheduler: not in enabled drivers build config
00:04:04.654 crypto/uadk: not in enabled drivers build config
00:04:04.654 crypto/virtio: not in enabled drivers build config
00:04:04.654 compress/isal: not in enabled drivers build config
00:04:04.654 compress/mlx5: not in enabled drivers build config
00:04:04.654 compress/nitrox: not in enabled drivers build config
00:04:04.654 compress/octeontx: not in enabled drivers build config
00:04:04.654 compress/zlib: not in enabled drivers build config
00:04:04.654 regex/*: missing internal dependency, "regexdev"
00:04:04.654 ml/*: missing internal dependency, "mldev"
00:04:04.654 vdpa/ifc: not in enabled drivers build config
00:04:04.654 vdpa/mlx5: not in enabled drivers build config
00:04:04.654 vdpa/nfp: not in enabled drivers build config
00:04:04.654 vdpa/sfc: not in enabled drivers build config
00:04:04.654 event/*: missing internal dependency, "eventdev"
00:04:04.654 baseband/*: missing internal dependency, "bbdev"
00:04:04.654 gpu/*: missing internal dependency, "gpudev"
00:04:04.654
00:04:04.654
00:04:04.914 Build targets in project: 85
00:04:04.914
00:04:04.914 DPDK 24.03.0
00:04:04.914
00:04:04.914 User defined options
00:04:04.914 buildtype : debug
00:04:04.914 default_library : shared
00:04:04.914 libdir : lib
00:04:04.914 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:04.914 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:04:04.914 c_link_args :
00:04:04.914 cpu_instruction_set: native
00:04:04.914 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:04:04.914 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:04:04.914 enable_docs : false
00:04:04.914 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:04:04.914 enable_kmods : false
00:04:04.914 max_lcores : 128
00:04:04.914 tests : false
00:04:04.914
00:04:04.914 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
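The "User defined options" block above can be mapped back to a meson invocation. The sketch below is a hypothetical reconstruction for illustration only (it is not the exact command SPDK's build scripts ran); built-in meson options use `--option` form, DPDK project options use `-D` form, and the long `disable_apps`/`disable_libs` lists are left out here for brevity:

```shell
# Hypothetical reconstruction of the meson setup call implied by the
# logged "User defined options"; the real SPDK wrapper may differ.
DPDK_PREFIX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build

meson_cmd="meson setup build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix=$DPDK_PREFIX \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dmax_lcores=128 \
  -Dtests=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring"

# Print the command rather than run it: executing requires a DPDK source tree.
echo "$meson_cmd"
```

With these options only the 85 targets listed as enabled are configured, which is why the ninja run that follows counts [n/268] steps instead of a full DPDK build.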
00:04:05.489 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:04:05.489 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:04:05.489 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:04:05.489 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:04:05.489 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:04:05.489 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:04:05.489 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:04:05.489 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:04:05.489 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:04:05.489 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:04:05.489 [10/268] Linking static target lib/librte_kvargs.a
00:04:05.489 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:04:05.489 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:04:05.489 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:04:05.489 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:05.489 [15/268] Linking static target lib/librte_log.a
00:04:05.489 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:06.076 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.343 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:04:06.343 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:06.343 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:04:06.343 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:04:06.343 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:06.343 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:06.343 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:06.343 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:04:06.343 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:04:06.343 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:06.343 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:04:06.343 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:04:06.343 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:04:06.343 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:04:06.343 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:04:06.343 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:04:06.343 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:04:06.343 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:06.343 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:04:06.343 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:04:06.343 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:04:06.343 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:04:06.343 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:04:06.343 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:04:06.343 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:04:06.343 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:04:06.343 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:04:06.343 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:04:06.343 [46/268] Linking static target lib/librte_telemetry.a
00:04:06.343 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:06.343 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:04:06.343 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:04:06.343 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:04:06.343 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:04:06.343 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:06.611 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:04:06.611 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:04:06.611 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:04:06.611 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:06.611 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:04:06.611 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:04:06.611 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:04:06.611 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:04:06.611 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:04:06.611 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:04:06.611 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:04:06.878 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:04:06.878 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:04:06.878 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.878 [67/268] Linking target lib/librte_log.so.24.1
00:04:06.878 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:04:06.878 [69/268] Linking static target lib/librte_pci.a
00:04:06.878 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:04:07.142 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:04:07.142 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:04:07.142 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:04:07.142 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:04:07.142 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:04:07.142 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:04:07.142 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:04:07.143 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:04:07.143 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:04:07.143 [80/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:04:07.143 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:04:07.409 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:04:07.409 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:04:07.409 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:04:07.409 [85/268] Linking target lib/librte_kvargs.so.24.1
00:04:07.409 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:04:07.409 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:04:07.409 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:04:07.409 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:04:07.409 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:04:07.409 [91/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:04:07.409 [92/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:04:07.409 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:04:07.409 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:04:07.409 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:04:07.409 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:04:07.409 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:04:07.409 [98/268] Linking static target lib/librte_meter.a
00:04:07.409 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:04:07.409 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.409 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:04:07.409 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:04:07.409 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:04:07.409 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:04:07.409 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:04:07.409 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:04:07.409 [107/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.409 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:04:07.676 [109/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:04:07.676 [110/268] Linking static target lib/librte_mempool.a
00:04:07.676 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:04:07.676 [112/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:04:07.676 [113/268] Linking static target lib/librte_ring.a
00:04:07.676 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:04:07.676 [115/268] Linking static target lib/librte_eal.a
00:04:07.676 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:04:07.676 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:04:07.676 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:04:07.676 [119/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:04:07.676 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:04:07.676 [121/268] Linking target lib/librte_telemetry.so.24.1
00:04:07.676 [122/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:04:07.676 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:04:07.676 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:04:07.676 [125/268] Linking static target lib/librte_rcu.a
00:04:07.676 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:04:07.676 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:04:07.676 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:04:07.943 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:04:07.943 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:04:07.943 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:04:07.943 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:04:07.943 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:04:07.943 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:04:07.943 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:04:07.943 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:04:07.943 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:04:07.943 [138/268] Linking static target lib/librte_net.a
00:04:08.208 [139/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:04:08.208 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:04:08.208 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:04:08.208 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:04:08.208 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.208 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:04:08.208 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:04:08.208 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:04:08.208 [147/268] Linking static target lib/librte_cmdline.a
00:04:08.473 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:04:08.473 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:04:08.473 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:04:08.473 [151/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.473 [152/268] Linking static target lib/librte_timer.a
00:04:08.473 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:04:08.473 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:04:08.473 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:04:08.473 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:04:08.473 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:04:08.473 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:04:08.473 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.473 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:04:08.733 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:04:08.733 [162/268] Linking static target lib/librte_dmadev.a
00:04:08.733 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:04:08.733 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:04:08.733 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.733 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:04:08.733 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:04:08.733 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:04:08.733 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:04:08.733 [170/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:04:08.733 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:04:08.733 [172/268] Linking static target lib/librte_power.a
00:04:08.733 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:04:08.733 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:04:08.733 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:04:08.992 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:04:08.992 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:04:08.992 [178/268] Linking static target lib/librte_compressdev.a
00:04:08.992 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:04:08.992 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:04:08.992 [181/268] Linking static target lib/librte_hash.a
00:04:08.992 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:04:08.992 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:04:08.992 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:04:08.992 [185/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:04:08.992 [186/268] Linking static target lib/librte_mbuf.a
00:04:08.992 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.252 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:04:09.252 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:04:09.252 [190/268] Linking static target lib/librte_reorder.a
00:04:09.252 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:04:09.252 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:04:09.252 [193/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:04:09.252 [194/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:04:09.252 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:04:09.252 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:04:09.252 [197/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.252 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:04:09.252 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:04:09.252 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:04:09.511 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.511 [202/268] Linking static target lib/librte_security.a
00:04:09.511 [203/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.511 [204/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.511 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:04:09.511 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:09.511 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:09.512 [208/268] Linking static target drivers/librte_mempool_ring.a
00:04:09.512 [209/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:04:09.512 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:04:09.512 [211/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:09.512 [212/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:09.512 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:09.512 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:09.512 [215/268] Linking static target drivers/librte_bus_vdev.a
00:04:09.512 [216/268] Linking static target drivers/librte_bus_pci.a
00:04:09.512 [217/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.512 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.512 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:04:09.772 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.772 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:04:09.772 [222/268] Compiling C object
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:09.772 [223/268] Linking static target lib/librte_cryptodev.a 00:04:09.772 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:10.030 [225/268] Linking static target lib/librte_ethdev.a 00:04:10.030 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.967 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.904 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:13.806 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.806 [230/268] Linking target lib/librte_eal.so.24.1 00:04:14.066 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:14.066 [232/268] Linking target lib/librte_timer.so.24.1 00:04:14.066 [233/268] Linking target lib/librte_meter.so.24.1 00:04:14.066 [234/268] Linking target lib/librte_dmadev.so.24.1 00:04:14.066 [235/268] Linking target lib/librte_pci.so.24.1 00:04:14.066 [236/268] Linking target lib/librte_ring.so.24.1 00:04:14.066 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:14.066 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:14.066 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:14.066 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:14.066 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:14.066 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:14.325 [243/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.325 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:14.325 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:14.325 
[246/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:14.325 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:14.325 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:14.325 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:14.325 [250/268] Linking target lib/librte_mbuf.so.24.1 00:04:14.584 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:14.584 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:14.584 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:14.584 [254/268] Linking target lib/librte_net.so.24.1 00:04:14.584 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:14.584 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:14.584 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:14.843 [258/268] Linking target lib/librte_hash.so.24.1 00:04:14.843 [259/268] Linking target lib/librte_cmdline.so.24.1 00:04:14.843 [260/268] Linking target lib/librte_security.so.24.1 00:04:14.843 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:14.843 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:14.843 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:14.843 [264/268] Linking target lib/librte_power.so.24.1 00:04:18.173 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:18.173 [266/268] Linking static target lib/librte_vhost.a 00:04:19.112 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.112 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:19.112 INFO: autodetecting backend as ninja 00:04:19.112 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:04:41.053 CC lib/ut_mock/mock.o 00:04:41.053 CC lib/log/log.o 00:04:41.053 CC lib/log/log_flags.o 00:04:41.053 CC lib/log/log_deprecated.o 00:04:41.053 CC lib/ut/ut.o 00:04:41.053 LIB libspdk_ut.a 00:04:41.053 LIB libspdk_log.a 00:04:41.053 LIB libspdk_ut_mock.a 00:04:41.053 SO libspdk_ut.so.2.0 00:04:41.053 SO libspdk_ut_mock.so.6.0 00:04:41.053 SO libspdk_log.so.7.0 00:04:41.053 SYMLINK libspdk_ut_mock.so 00:04:41.053 SYMLINK libspdk_ut.so 00:04:41.053 SYMLINK libspdk_log.so 00:04:41.053 CC lib/dma/dma.o 00:04:41.053 CC lib/ioat/ioat.o 00:04:41.053 CXX lib/trace_parser/trace.o 00:04:41.053 CC lib/util/base64.o 00:04:41.053 CC lib/util/bit_array.o 00:04:41.053 CC lib/util/cpuset.o 00:04:41.053 CC lib/util/crc16.o 00:04:41.053 CC lib/util/crc32.o 00:04:41.053 CC lib/util/crc32c.o 00:04:41.053 CC lib/util/crc32_ieee.o 00:04:41.053 CC lib/util/crc64.o 00:04:41.053 CC lib/util/dif.o 00:04:41.053 CC lib/util/fd.o 00:04:41.053 CC lib/util/fd_group.o 00:04:41.053 CC lib/util/file.o 00:04:41.053 CC lib/util/hexlify.o 00:04:41.053 CC lib/util/iov.o 00:04:41.053 CC lib/util/math.o 00:04:41.053 CC lib/util/net.o 00:04:41.053 CC lib/util/pipe.o 00:04:41.053 CC lib/util/strerror_tls.o 00:04:41.053 CC lib/util/uuid.o 00:04:41.053 CC lib/util/string.o 00:04:41.053 CC lib/util/xor.o 00:04:41.053 CC lib/util/zipf.o 00:04:41.053 CC lib/util/md5.o 00:04:41.053 CC lib/vfio_user/host/vfio_user_pci.o 00:04:41.053 CC lib/vfio_user/host/vfio_user.o 00:04:41.053 LIB libspdk_dma.a 00:04:41.053 SO libspdk_dma.so.5.0 00:04:41.053 SYMLINK libspdk_dma.so 00:04:41.053 LIB libspdk_ioat.a 00:04:41.053 SO libspdk_ioat.so.7.0 00:04:41.054 SYMLINK libspdk_ioat.so 00:04:41.054 LIB libspdk_vfio_user.a 00:04:41.054 SO libspdk_vfio_user.so.5.0 00:04:41.054 SYMLINK libspdk_vfio_user.so 00:04:41.054 LIB libspdk_util.a 00:04:41.054 SO libspdk_util.so.10.0 00:04:41.054 SYMLINK libspdk_util.so 00:04:41.054 CC lib/conf/conf.o 
00:04:41.054 CC lib/rdma_utils/rdma_utils.o 00:04:41.054 CC lib/rdma_provider/common.o 00:04:41.054 CC lib/idxd/idxd.o 00:04:41.054 CC lib/json/json_parse.o 00:04:41.054 CC lib/idxd/idxd_user.o 00:04:41.054 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:41.054 CC lib/vmd/vmd.o 00:04:41.054 CC lib/json/json_util.o 00:04:41.054 CC lib/idxd/idxd_kernel.o 00:04:41.054 CC lib/env_dpdk/env.o 00:04:41.054 CC lib/vmd/led.o 00:04:41.054 CC lib/json/json_write.o 00:04:41.054 CC lib/env_dpdk/memory.o 00:04:41.054 CC lib/env_dpdk/pci.o 00:04:41.054 CC lib/env_dpdk/init.o 00:04:41.054 CC lib/env_dpdk/threads.o 00:04:41.054 CC lib/env_dpdk/pci_ioat.o 00:04:41.054 CC lib/env_dpdk/pci_virtio.o 00:04:41.054 CC lib/env_dpdk/pci_vmd.o 00:04:41.054 CC lib/env_dpdk/pci_idxd.o 00:04:41.054 CC lib/env_dpdk/pci_event.o 00:04:41.054 CC lib/env_dpdk/sigbus_handler.o 00:04:41.054 CC lib/env_dpdk/pci_dpdk.o 00:04:41.054 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:41.054 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:41.054 LIB libspdk_conf.a 00:04:41.313 SO libspdk_conf.so.6.0 00:04:41.313 LIB libspdk_rdma_provider.a 00:04:41.313 LIB libspdk_rdma_utils.a 00:04:41.313 LIB libspdk_json.a 00:04:41.313 SO libspdk_rdma_provider.so.6.0 00:04:41.313 SYMLINK libspdk_conf.so 00:04:41.313 SO libspdk_rdma_utils.so.1.0 00:04:41.313 SO libspdk_json.so.6.0 00:04:41.313 SYMLINK libspdk_rdma_provider.so 00:04:41.313 SYMLINK libspdk_rdma_utils.so 00:04:41.313 SYMLINK libspdk_json.so 00:04:41.571 CC lib/jsonrpc/jsonrpc_server.o 00:04:41.571 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:41.571 CC lib/jsonrpc/jsonrpc_client.o 00:04:41.571 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:41.571 LIB libspdk_idxd.a 00:04:41.571 SO libspdk_idxd.so.12.1 00:04:41.571 LIB libspdk_vmd.a 00:04:41.571 SO libspdk_vmd.so.6.0 00:04:41.571 SYMLINK libspdk_idxd.so 00:04:41.571 SYMLINK libspdk_vmd.so 00:04:41.830 LIB libspdk_jsonrpc.a 00:04:41.830 SO libspdk_jsonrpc.so.6.0 00:04:41.830 SYMLINK libspdk_jsonrpc.so 00:04:41.830 LIB libspdk_trace_parser.a 
00:04:41.830 SO libspdk_trace_parser.so.6.0 00:04:42.088 SYMLINK libspdk_trace_parser.so 00:04:42.088 CC lib/rpc/rpc.o 00:04:42.346 LIB libspdk_rpc.a 00:04:42.346 SO libspdk_rpc.so.6.0 00:04:42.346 SYMLINK libspdk_rpc.so 00:04:42.346 CC lib/notify/notify.o 00:04:42.346 CC lib/notify/notify_rpc.o 00:04:42.346 CC lib/keyring/keyring.o 00:04:42.346 CC lib/keyring/keyring_rpc.o 00:04:42.346 CC lib/trace/trace.o 00:04:42.346 CC lib/trace/trace_flags.o 00:04:42.346 CC lib/trace/trace_rpc.o 00:04:42.604 LIB libspdk_notify.a 00:04:42.604 SO libspdk_notify.so.6.0 00:04:42.604 SYMLINK libspdk_notify.so 00:04:42.604 LIB libspdk_keyring.a 00:04:42.604 LIB libspdk_trace.a 00:04:42.604 SO libspdk_keyring.so.2.0 00:04:42.863 SO libspdk_trace.so.11.0 00:04:42.863 SYMLINK libspdk_keyring.so 00:04:42.863 SYMLINK libspdk_trace.so 00:04:42.863 CC lib/thread/thread.o 00:04:42.863 CC lib/thread/iobuf.o 00:04:42.863 CC lib/sock/sock.o 00:04:42.863 CC lib/sock/sock_rpc.o 00:04:43.122 LIB libspdk_env_dpdk.a 00:04:43.122 SO libspdk_env_dpdk.so.15.0 00:04:43.122 SYMLINK libspdk_env_dpdk.so 00:04:43.381 LIB libspdk_sock.a 00:04:43.381 SO libspdk_sock.so.10.0 00:04:43.381 SYMLINK libspdk_sock.so 00:04:43.640 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:43.640 CC lib/nvme/nvme_ctrlr.o 00:04:43.640 CC lib/nvme/nvme_fabric.o 00:04:43.640 CC lib/nvme/nvme_ns_cmd.o 00:04:43.640 CC lib/nvme/nvme_ns.o 00:04:43.640 CC lib/nvme/nvme_pcie_common.o 00:04:43.640 CC lib/nvme/nvme_pcie.o 00:04:43.640 CC lib/nvme/nvme_qpair.o 00:04:43.640 CC lib/nvme/nvme.o 00:04:43.640 CC lib/nvme/nvme_quirks.o 00:04:43.640 CC lib/nvme/nvme_transport.o 00:04:43.640 CC lib/nvme/nvme_discovery.o 00:04:43.640 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:43.640 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:43.640 CC lib/nvme/nvme_tcp.o 00:04:43.640 CC lib/nvme/nvme_opal.o 00:04:43.640 CC lib/nvme/nvme_io_msg.o 00:04:43.640 CC lib/nvme/nvme_poll_group.o 00:04:43.640 CC lib/nvme/nvme_zns.o 00:04:43.640 CC lib/nvme/nvme_stubs.o 00:04:43.640 CC 
lib/nvme/nvme_auth.o 00:04:43.640 CC lib/nvme/nvme_cuse.o 00:04:43.640 CC lib/nvme/nvme_vfio_user.o 00:04:43.640 CC lib/nvme/nvme_rdma.o 00:04:44.578 LIB libspdk_thread.a 00:04:44.578 SO libspdk_thread.so.10.2 00:04:44.578 SYMLINK libspdk_thread.so 00:04:44.837 CC lib/accel/accel.o 00:04:44.837 CC lib/accel/accel_rpc.o 00:04:44.837 CC lib/accel/accel_sw.o 00:04:44.837 CC lib/fsdev/fsdev.o 00:04:44.837 CC lib/fsdev/fsdev_io.o 00:04:44.837 CC lib/blob/blobstore.o 00:04:44.837 CC lib/blob/request.o 00:04:44.837 CC lib/fsdev/fsdev_rpc.o 00:04:44.837 CC lib/blob/zeroes.o 00:04:44.837 CC lib/vfu_tgt/tgt_endpoint.o 00:04:44.837 CC lib/init/json_config.o 00:04:44.837 CC lib/virtio/virtio.o 00:04:44.837 CC lib/blob/blob_bs_dev.o 00:04:44.837 CC lib/vfu_tgt/tgt_rpc.o 00:04:44.837 CC lib/init/subsystem.o 00:04:44.837 CC lib/virtio/virtio_vhost_user.o 00:04:44.837 CC lib/init/subsystem_rpc.o 00:04:44.837 CC lib/virtio/virtio_vfio_user.o 00:04:44.837 CC lib/init/rpc.o 00:04:44.837 CC lib/virtio/virtio_pci.o 00:04:45.096 LIB libspdk_init.a 00:04:45.096 SO libspdk_init.so.6.0 00:04:45.096 SYMLINK libspdk_init.so 00:04:45.096 LIB libspdk_vfu_tgt.a 00:04:45.096 SO libspdk_vfu_tgt.so.3.0 00:04:45.096 LIB libspdk_virtio.a 00:04:45.354 SO libspdk_virtio.so.7.0 00:04:45.354 SYMLINK libspdk_vfu_tgt.so 00:04:45.354 SYMLINK libspdk_virtio.so 00:04:45.354 CC lib/event/app.o 00:04:45.354 CC lib/event/reactor.o 00:04:45.354 CC lib/event/log_rpc.o 00:04:45.354 CC lib/event/app_rpc.o 00:04:45.354 CC lib/event/scheduler_static.o 00:04:45.613 LIB libspdk_fsdev.a 00:04:45.613 SO libspdk_fsdev.so.1.0 00:04:45.613 SYMLINK libspdk_fsdev.so 00:04:45.872 LIB libspdk_event.a 00:04:45.872 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:45.872 SO libspdk_event.so.15.0 00:04:45.872 SYMLINK libspdk_event.so 00:04:46.131 LIB libspdk_accel.a 00:04:46.131 SO libspdk_accel.so.16.0 00:04:46.132 LIB libspdk_nvme.a 00:04:46.132 SYMLINK libspdk_accel.so 00:04:46.132 SO libspdk_nvme.so.14.0 00:04:46.132 CC 
lib/bdev/bdev.o 00:04:46.390 CC lib/bdev/bdev_rpc.o 00:04:46.390 CC lib/bdev/bdev_zone.o 00:04:46.390 CC lib/bdev/part.o 00:04:46.390 CC lib/bdev/scsi_nvme.o 00:04:46.390 SYMLINK libspdk_nvme.so 00:04:46.390 LIB libspdk_fuse_dispatcher.a 00:04:46.390 SO libspdk_fuse_dispatcher.so.1.0 00:04:46.650 SYMLINK libspdk_fuse_dispatcher.so 00:04:48.033 LIB libspdk_blob.a 00:04:48.033 SO libspdk_blob.so.11.0 00:04:48.033 SYMLINK libspdk_blob.so 00:04:48.292 CC lib/lvol/lvol.o 00:04:48.292 CC lib/blobfs/blobfs.o 00:04:48.292 CC lib/blobfs/tree.o 00:04:48.859 LIB libspdk_bdev.a 00:04:48.859 SO libspdk_bdev.so.17.0 00:04:48.859 SYMLINK libspdk_bdev.so 00:04:49.122 LIB libspdk_blobfs.a 00:04:49.122 SO libspdk_blobfs.so.10.0 00:04:49.122 SYMLINK libspdk_blobfs.so 00:04:49.122 LIB libspdk_lvol.a 00:04:49.122 SO libspdk_lvol.so.10.0 00:04:49.122 CC lib/nbd/nbd.o 00:04:49.122 CC lib/nbd/nbd_rpc.o 00:04:49.122 CC lib/ublk/ublk.o 00:04:49.122 CC lib/ublk/ublk_rpc.o 00:04:49.122 CC lib/nvmf/ctrlr.o 00:04:49.122 CC lib/scsi/dev.o 00:04:49.122 CC lib/scsi/lun.o 00:04:49.122 CC lib/ftl/ftl_core.o 00:04:49.122 CC lib/nvmf/ctrlr_discovery.o 00:04:49.122 CC lib/nvmf/ctrlr_bdev.o 00:04:49.122 CC lib/scsi/port.o 00:04:49.122 CC lib/ftl/ftl_init.o 00:04:49.122 CC lib/scsi/scsi.o 00:04:49.122 CC lib/ftl/ftl_layout.o 00:04:49.122 CC lib/scsi/scsi_bdev.o 00:04:49.122 CC lib/nvmf/subsystem.o 00:04:49.122 CC lib/ftl/ftl_debug.o 00:04:49.122 CC lib/ftl/ftl_io.o 00:04:49.122 CC lib/nvmf/nvmf.o 00:04:49.122 CC lib/ftl/ftl_sb.o 00:04:49.122 CC lib/scsi/scsi_rpc.o 00:04:49.122 CC lib/scsi/scsi_pr.o 00:04:49.122 CC lib/nvmf/transport.o 00:04:49.123 CC lib/nvmf/nvmf_rpc.o 00:04:49.123 CC lib/ftl/ftl_l2p.o 00:04:49.123 CC lib/scsi/task.o 00:04:49.123 CC lib/ftl/ftl_l2p_flat.o 00:04:49.123 CC lib/nvmf/tcp.o 00:04:49.123 CC lib/nvmf/stubs.o 00:04:49.123 CC lib/ftl/ftl_nv_cache.o 00:04:49.123 CC lib/ftl/ftl_band.o 00:04:49.123 CC lib/nvmf/mdns_server.o 00:04:49.123 CC lib/nvmf/vfio_user.o 00:04:49.123 CC 
lib/ftl/ftl_band_ops.o 00:04:49.123 CC lib/nvmf/rdma.o 00:04:49.123 CC lib/ftl/ftl_writer.o 00:04:49.123 CC lib/ftl/ftl_rq.o 00:04:49.123 CC lib/nvmf/auth.o 00:04:49.123 CC lib/ftl/ftl_reloc.o 00:04:49.123 CC lib/ftl/ftl_l2p_cache.o 00:04:49.123 CC lib/ftl/ftl_p2l.o 00:04:49.123 CC lib/ftl/ftl_p2l_log.o 00:04:49.123 CC lib/ftl/mngt/ftl_mngt.o 00:04:49.123 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:49.123 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:49.123 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:49.123 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:49.123 SYMLINK libspdk_lvol.so 00:04:49.123 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:49.388 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:49.388 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:49.654 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:49.654 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:49.654 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:49.654 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:49.654 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:49.654 CC lib/ftl/utils/ftl_conf.o 00:04:49.654 CC lib/ftl/utils/ftl_md.o 00:04:49.654 CC lib/ftl/utils/ftl_mempool.o 00:04:49.654 CC lib/ftl/utils/ftl_bitmap.o 00:04:49.654 CC lib/ftl/utils/ftl_property.o 00:04:49.654 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:49.654 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:49.654 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:49.654 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:49.654 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:49.654 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:49.914 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:49.914 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:49.914 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:49.914 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:49.914 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:49.914 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:49.914 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:49.914 CC lib/ftl/base/ftl_base_dev.o 00:04:49.914 CC lib/ftl/base/ftl_base_bdev.o 00:04:49.914 CC lib/ftl/ftl_trace.o 00:04:49.914 LIB libspdk_nbd.a 00:04:49.914 SO libspdk_nbd.so.7.0 00:04:50.174 
SYMLINK libspdk_nbd.so 00:04:50.174 LIB libspdk_scsi.a 00:04:50.174 SO libspdk_scsi.so.9.0 00:04:50.174 SYMLINK libspdk_scsi.so 00:04:50.174 LIB libspdk_ublk.a 00:04:50.174 SO libspdk_ublk.so.3.0 00:04:50.432 SYMLINK libspdk_ublk.so 00:04:50.432 CC lib/iscsi/conn.o 00:04:50.432 CC lib/vhost/vhost.o 00:04:50.432 CC lib/iscsi/init_grp.o 00:04:50.432 CC lib/vhost/vhost_rpc.o 00:04:50.432 CC lib/iscsi/iscsi.o 00:04:50.432 CC lib/iscsi/param.o 00:04:50.433 CC lib/vhost/vhost_scsi.o 00:04:50.433 CC lib/iscsi/portal_grp.o 00:04:50.433 CC lib/vhost/vhost_blk.o 00:04:50.433 CC lib/vhost/rte_vhost_user.o 00:04:50.433 CC lib/iscsi/tgt_node.o 00:04:50.433 CC lib/iscsi/iscsi_subsystem.o 00:04:50.433 CC lib/iscsi/iscsi_rpc.o 00:04:50.433 CC lib/iscsi/task.o 00:04:50.691 LIB libspdk_ftl.a 00:04:50.950 SO libspdk_ftl.so.9.0 00:04:51.209 SYMLINK libspdk_ftl.so 00:04:51.778 LIB libspdk_vhost.a 00:04:51.778 SO libspdk_vhost.so.8.0 00:04:51.778 SYMLINK libspdk_vhost.so 00:04:51.778 LIB libspdk_iscsi.a 00:04:51.778 LIB libspdk_nvmf.a 00:04:52.037 SO libspdk_iscsi.so.8.0 00:04:52.037 SO libspdk_nvmf.so.19.0 00:04:52.037 SYMLINK libspdk_iscsi.so 00:04:52.037 SYMLINK libspdk_nvmf.so 00:04:52.296 CC module/env_dpdk/env_dpdk_rpc.o 00:04:52.296 CC module/vfu_device/vfu_virtio.o 00:04:52.296 CC module/vfu_device/vfu_virtio_blk.o 00:04:52.296 CC module/vfu_device/vfu_virtio_scsi.o 00:04:52.296 CC module/vfu_device/vfu_virtio_rpc.o 00:04:52.296 CC module/vfu_device/vfu_virtio_fs.o 00:04:52.555 CC module/fsdev/aio/fsdev_aio.o 00:04:52.555 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:52.555 CC module/fsdev/aio/linux_aio_mgr.o 00:04:52.555 CC module/keyring/file/keyring.o 00:04:52.555 CC module/accel/error/accel_error.o 00:04:52.555 CC module/keyring/file/keyring_rpc.o 00:04:52.555 CC module/accel/error/accel_error_rpc.o 00:04:52.555 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:52.555 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:52.555 CC module/accel/iaa/accel_iaa.o 00:04:52.555 
CC module/accel/iaa/accel_iaa_rpc.o 00:04:52.555 CC module/sock/posix/posix.o 00:04:52.555 CC module/accel/dsa/accel_dsa.o 00:04:52.555 CC module/keyring/linux/keyring.o 00:04:52.555 CC module/blob/bdev/blob_bdev.o 00:04:52.555 CC module/accel/dsa/accel_dsa_rpc.o 00:04:52.555 CC module/keyring/linux/keyring_rpc.o 00:04:52.555 CC module/accel/ioat/accel_ioat.o 00:04:52.555 CC module/scheduler/gscheduler/gscheduler.o 00:04:52.555 CC module/accel/ioat/accel_ioat_rpc.o 00:04:52.555 LIB libspdk_env_dpdk_rpc.a 00:04:52.555 SO libspdk_env_dpdk_rpc.so.6.0 00:04:52.555 SYMLINK libspdk_env_dpdk_rpc.so 00:04:52.555 LIB libspdk_scheduler_dpdk_governor.a 00:04:52.555 LIB libspdk_scheduler_gscheduler.a 00:04:52.815 LIB libspdk_keyring_file.a 00:04:52.815 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:52.815 SO libspdk_scheduler_gscheduler.so.4.0 00:04:52.815 SO libspdk_keyring_file.so.2.0 00:04:52.815 LIB libspdk_keyring_linux.a 00:04:52.815 LIB libspdk_scheduler_dynamic.a 00:04:52.815 LIB libspdk_accel_error.a 00:04:52.815 LIB libspdk_accel_iaa.a 00:04:52.815 LIB libspdk_accel_ioat.a 00:04:52.815 SYMLINK libspdk_scheduler_gscheduler.so 00:04:52.815 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:52.815 SO libspdk_keyring_linux.so.1.0 00:04:52.815 SO libspdk_scheduler_dynamic.so.4.0 00:04:52.815 SO libspdk_accel_error.so.2.0 00:04:52.815 SYMLINK libspdk_keyring_file.so 00:04:52.815 SO libspdk_accel_iaa.so.3.0 00:04:52.815 SO libspdk_accel_ioat.so.6.0 00:04:52.815 SYMLINK libspdk_scheduler_dynamic.so 00:04:52.815 SYMLINK libspdk_keyring_linux.so 00:04:52.815 SYMLINK libspdk_accel_error.so 00:04:52.815 LIB libspdk_blob_bdev.a 00:04:52.815 SYMLINK libspdk_accel_ioat.so 00:04:52.815 SYMLINK libspdk_accel_iaa.so 00:04:52.815 LIB libspdk_accel_dsa.a 00:04:52.815 SO libspdk_blob_bdev.so.11.0 00:04:52.815 SO libspdk_accel_dsa.so.5.0 00:04:52.815 SYMLINK libspdk_blob_bdev.so 00:04:52.815 SYMLINK libspdk_accel_dsa.so 00:04:53.076 LIB libspdk_vfu_device.a 00:04:53.076 SO 
libspdk_vfu_device.so.3.0 00:04:53.076 CC module/bdev/malloc/bdev_malloc.o 00:04:53.076 CC module/blobfs/bdev/blobfs_bdev.o 00:04:53.076 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:53.076 CC module/bdev/null/bdev_null.o 00:04:53.076 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:53.076 CC module/bdev/null/bdev_null_rpc.o 00:04:53.076 CC module/bdev/gpt/gpt.o 00:04:53.076 CC module/bdev/gpt/vbdev_gpt.o 00:04:53.076 CC module/bdev/iscsi/bdev_iscsi.o 00:04:53.076 CC module/bdev/aio/bdev_aio.o 00:04:53.076 CC module/bdev/delay/vbdev_delay.o 00:04:53.076 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:53.076 CC module/bdev/aio/bdev_aio_rpc.o 00:04:53.076 CC module/bdev/lvol/vbdev_lvol.o 00:04:53.076 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:53.076 CC module/bdev/nvme/bdev_nvme.o 00:04:53.076 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:53.076 CC module/bdev/raid/bdev_raid.o 00:04:53.076 CC module/bdev/split/vbdev_split.o 00:04:53.076 CC module/bdev/passthru/vbdev_passthru.o 00:04:53.076 CC module/bdev/error/vbdev_error.o 00:04:53.076 CC module/bdev/split/vbdev_split_rpc.o 00:04:53.076 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:53.076 CC module/bdev/error/vbdev_error_rpc.o 00:04:53.076 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:53.076 CC module/bdev/raid/bdev_raid_rpc.o 00:04:53.076 CC module/bdev/nvme/nvme_rpc.o 00:04:53.076 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:53.076 CC module/bdev/raid/bdev_raid_sb.o 00:04:53.076 CC module/bdev/nvme/bdev_mdns_client.o 00:04:53.076 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:53.076 CC module/bdev/raid/raid0.o 00:04:53.076 CC module/bdev/nvme/vbdev_opal.o 00:04:53.076 CC module/bdev/raid/raid1.o 00:04:53.076 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:53.076 CC module/bdev/raid/concat.o 00:04:53.076 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:53.076 CC module/bdev/ftl/bdev_ftl.o 00:04:53.076 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:53.076 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:53.076 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:04:53.076 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:53.337 LIB libspdk_fsdev_aio.a 00:04:53.337 SYMLINK libspdk_vfu_device.so 00:04:53.337 SO libspdk_fsdev_aio.so.1.0 00:04:53.337 LIB libspdk_sock_posix.a 00:04:53.337 SO libspdk_sock_posix.so.6.0 00:04:53.337 SYMLINK libspdk_fsdev_aio.so 00:04:53.337 SYMLINK libspdk_sock_posix.so 00:04:53.596 LIB libspdk_blobfs_bdev.a 00:04:53.596 SO libspdk_blobfs_bdev.so.6.0 00:04:53.596 SYMLINK libspdk_blobfs_bdev.so 00:04:53.596 LIB libspdk_bdev_split.a 00:04:53.596 LIB libspdk_bdev_error.a 00:04:53.596 SO libspdk_bdev_split.so.6.0 00:04:53.596 LIB libspdk_bdev_null.a 00:04:53.596 LIB libspdk_bdev_ftl.a 00:04:53.596 SO libspdk_bdev_error.so.6.0 00:04:53.596 LIB libspdk_bdev_passthru.a 00:04:53.596 SO libspdk_bdev_null.so.6.0 00:04:53.596 SO libspdk_bdev_ftl.so.6.0 00:04:53.596 SO libspdk_bdev_passthru.so.6.0 00:04:53.596 LIB libspdk_bdev_gpt.a 00:04:53.596 SYMLINK libspdk_bdev_split.so 00:04:53.596 SO libspdk_bdev_gpt.so.6.0 00:04:53.596 SYMLINK libspdk_bdev_error.so 00:04:53.596 LIB libspdk_bdev_malloc.a 00:04:53.596 LIB libspdk_bdev_iscsi.a 00:04:53.855 SYMLINK libspdk_bdev_null.so 00:04:53.855 SYMLINK libspdk_bdev_ftl.so 00:04:53.855 LIB libspdk_bdev_aio.a 00:04:53.855 SO libspdk_bdev_iscsi.so.6.0 00:04:53.855 SYMLINK libspdk_bdev_passthru.so 00:04:53.855 SO libspdk_bdev_malloc.so.6.0 00:04:53.855 LIB libspdk_bdev_zone_block.a 00:04:53.855 SO libspdk_bdev_aio.so.6.0 00:04:53.856 SYMLINK libspdk_bdev_gpt.so 00:04:53.856 SO libspdk_bdev_zone_block.so.6.0 00:04:53.856 SYMLINK libspdk_bdev_iscsi.so 00:04:53.856 LIB libspdk_bdev_delay.a 00:04:53.856 SYMLINK libspdk_bdev_malloc.so 00:04:53.856 SYMLINK libspdk_bdev_aio.so 00:04:53.856 SO libspdk_bdev_delay.so.6.0 00:04:53.856 SYMLINK libspdk_bdev_zone_block.so 00:04:53.856 SYMLINK libspdk_bdev_delay.so 00:04:53.856 LIB libspdk_bdev_virtio.a 00:04:53.856 SO libspdk_bdev_virtio.so.6.0 00:04:53.856 LIB libspdk_bdev_lvol.a 00:04:54.114 SO 
libspdk_bdev_lvol.so.6.0 00:04:54.114 SYMLINK libspdk_bdev_virtio.so 00:04:54.114 SYMLINK libspdk_bdev_lvol.so 00:04:54.379 LIB libspdk_bdev_raid.a 00:04:54.379 SO libspdk_bdev_raid.so.6.0 00:04:54.637 SYMLINK libspdk_bdev_raid.so 00:04:55.572 LIB libspdk_bdev_nvme.a 00:04:55.572 SO libspdk_bdev_nvme.so.7.0 00:04:55.572 SYMLINK libspdk_bdev_nvme.so 00:04:56.166 CC module/event/subsystems/sock/sock.o 00:04:56.166 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:56.166 CC module/event/subsystems/vmd/vmd.o 00:04:56.166 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:56.166 CC module/event/subsystems/iobuf/iobuf.o 00:04:56.166 CC module/event/subsystems/keyring/keyring.o 00:04:56.166 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:56.166 CC module/event/subsystems/scheduler/scheduler.o 00:04:56.166 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:56.166 CC module/event/subsystems/fsdev/fsdev.o 00:04:56.166 LIB libspdk_event_keyring.a 00:04:56.166 LIB libspdk_event_vhost_blk.a 00:04:56.166 LIB libspdk_event_fsdev.a 00:04:56.166 LIB libspdk_event_vfu_tgt.a 00:04:56.166 LIB libspdk_event_vmd.a 00:04:56.166 LIB libspdk_event_sock.a 00:04:56.166 LIB libspdk_event_scheduler.a 00:04:56.166 SO libspdk_event_keyring.so.1.0 00:04:56.166 SO libspdk_event_vhost_blk.so.3.0 00:04:56.166 LIB libspdk_event_iobuf.a 00:04:56.166 SO libspdk_event_fsdev.so.1.0 00:04:56.166 SO libspdk_event_sock.so.5.0 00:04:56.166 SO libspdk_event_scheduler.so.4.0 00:04:56.166 SO libspdk_event_vfu_tgt.so.3.0 00:04:56.166 SO libspdk_event_vmd.so.6.0 00:04:56.166 SO libspdk_event_iobuf.so.3.0 00:04:56.166 SYMLINK libspdk_event_vhost_blk.so 00:04:56.166 SYMLINK libspdk_event_keyring.so 00:04:56.166 SYMLINK libspdk_event_fsdev.so 00:04:56.166 SYMLINK libspdk_event_sock.so 00:04:56.166 SYMLINK libspdk_event_vfu_tgt.so 00:04:56.166 SYMLINK libspdk_event_scheduler.so 00:04:56.166 SYMLINK libspdk_event_vmd.so 00:04:56.166 SYMLINK libspdk_event_iobuf.so 00:04:56.425 CC 
module/event/subsystems/accel/accel.o 00:04:56.684 LIB libspdk_event_accel.a 00:04:56.684 SO libspdk_event_accel.so.6.0 00:04:56.684 SYMLINK libspdk_event_accel.so 00:04:56.942 CC module/event/subsystems/bdev/bdev.o 00:04:56.942 LIB libspdk_event_bdev.a 00:04:56.942 SO libspdk_event_bdev.so.6.0 00:04:56.942 SYMLINK libspdk_event_bdev.so 00:04:57.200 CC module/event/subsystems/scsi/scsi.o 00:04:57.200 CC module/event/subsystems/ublk/ublk.o 00:04:57.200 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:57.200 CC module/event/subsystems/nbd/nbd.o 00:04:57.200 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:57.458 LIB libspdk_event_nbd.a 00:04:57.459 LIB libspdk_event_ublk.a 00:04:57.459 LIB libspdk_event_scsi.a 00:04:57.459 SO libspdk_event_nbd.so.6.0 00:04:57.459 SO libspdk_event_ublk.so.3.0 00:04:57.459 SO libspdk_event_scsi.so.6.0 00:04:57.459 SYMLINK libspdk_event_nbd.so 00:04:57.459 SYMLINK libspdk_event_ublk.so 00:04:57.459 SYMLINK libspdk_event_scsi.so 00:04:57.459 LIB libspdk_event_nvmf.a 00:04:57.459 SO libspdk_event_nvmf.so.6.0 00:04:57.459 SYMLINK libspdk_event_nvmf.so 00:04:57.717 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:57.717 CC module/event/subsystems/iscsi/iscsi.o 00:04:57.717 LIB libspdk_event_vhost_scsi.a 00:04:57.717 SO libspdk_event_vhost_scsi.so.3.0 00:04:57.717 LIB libspdk_event_iscsi.a 00:04:57.717 SO libspdk_event_iscsi.so.6.0 00:04:57.977 SYMLINK libspdk_event_vhost_scsi.so 00:04:57.977 SYMLINK libspdk_event_iscsi.so 00:04:57.977 SO libspdk.so.6.0 00:04:57.977 SYMLINK libspdk.so 00:04:58.243 CXX app/trace/trace.o 00:04:58.243 CC app/trace_record/trace_record.o 00:04:58.243 CC test/rpc_client/rpc_client_test.o 00:04:58.243 CC app/spdk_nvme_identify/identify.o 00:04:58.243 CC app/spdk_lspci/spdk_lspci.o 00:04:58.243 CC app/spdk_nvme_perf/perf.o 00:04:58.243 CC app/spdk_top/spdk_top.o 00:04:58.243 TEST_HEADER include/spdk/accel.h 00:04:58.243 TEST_HEADER include/spdk/accel_module.h 00:04:58.243 TEST_HEADER include/spdk/assert.h 
00:04:58.243 TEST_HEADER include/spdk/barrier.h 00:04:58.243 CC app/spdk_nvme_discover/discovery_aer.o 00:04:58.243 TEST_HEADER include/spdk/base64.h 00:04:58.243 TEST_HEADER include/spdk/bdev.h 00:04:58.243 TEST_HEADER include/spdk/bdev_module.h 00:04:58.243 TEST_HEADER include/spdk/bdev_zone.h 00:04:58.243 TEST_HEADER include/spdk/bit_array.h 00:04:58.243 TEST_HEADER include/spdk/bit_pool.h 00:04:58.243 TEST_HEADER include/spdk/blob_bdev.h 00:04:58.243 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:58.243 TEST_HEADER include/spdk/blobfs.h 00:04:58.243 TEST_HEADER include/spdk/blob.h 00:04:58.243 TEST_HEADER include/spdk/conf.h 00:04:58.243 TEST_HEADER include/spdk/config.h 00:04:58.243 TEST_HEADER include/spdk/crc16.h 00:04:58.243 TEST_HEADER include/spdk/cpuset.h 00:04:58.243 TEST_HEADER include/spdk/crc64.h 00:04:58.243 TEST_HEADER include/spdk/crc32.h 00:04:58.243 TEST_HEADER include/spdk/dif.h 00:04:58.243 TEST_HEADER include/spdk/dma.h 00:04:58.243 TEST_HEADER include/spdk/endian.h 00:04:58.243 TEST_HEADER include/spdk/env.h 00:04:58.243 TEST_HEADER include/spdk/env_dpdk.h 00:04:58.243 TEST_HEADER include/spdk/event.h 00:04:58.243 TEST_HEADER include/spdk/fd_group.h 00:04:58.243 TEST_HEADER include/spdk/fd.h 00:04:58.243 TEST_HEADER include/spdk/file.h 00:04:58.243 TEST_HEADER include/spdk/fsdev.h 00:04:58.243 TEST_HEADER include/spdk/ftl.h 00:04:58.243 TEST_HEADER include/spdk/fsdev_module.h 00:04:58.243 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:58.243 TEST_HEADER include/spdk/gpt_spec.h 00:04:58.243 TEST_HEADER include/spdk/hexlify.h 00:04:58.243 TEST_HEADER include/spdk/histogram_data.h 00:04:58.243 TEST_HEADER include/spdk/idxd.h 00:04:58.243 TEST_HEADER include/spdk/idxd_spec.h 00:04:58.243 TEST_HEADER include/spdk/ioat.h 00:04:58.243 TEST_HEADER include/spdk/init.h 00:04:58.243 TEST_HEADER include/spdk/ioat_spec.h 00:04:58.243 TEST_HEADER include/spdk/iscsi_spec.h 00:04:58.243 TEST_HEADER include/spdk/json.h 00:04:58.243 TEST_HEADER 
include/spdk/jsonrpc.h 00:04:58.243 TEST_HEADER include/spdk/keyring.h 00:04:58.243 TEST_HEADER include/spdk/keyring_module.h 00:04:58.243 TEST_HEADER include/spdk/log.h 00:04:58.243 TEST_HEADER include/spdk/likely.h 00:04:58.243 TEST_HEADER include/spdk/lvol.h 00:04:58.243 TEST_HEADER include/spdk/md5.h 00:04:58.243 TEST_HEADER include/spdk/memory.h 00:04:58.243 TEST_HEADER include/spdk/mmio.h 00:04:58.243 TEST_HEADER include/spdk/nbd.h 00:04:58.243 TEST_HEADER include/spdk/net.h 00:04:58.243 TEST_HEADER include/spdk/notify.h 00:04:58.243 TEST_HEADER include/spdk/nvme.h 00:04:58.243 TEST_HEADER include/spdk/nvme_intel.h 00:04:58.243 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:58.243 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:58.243 TEST_HEADER include/spdk/nvme_spec.h 00:04:58.243 TEST_HEADER include/spdk/nvme_zns.h 00:04:58.243 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:58.243 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:58.243 TEST_HEADER include/spdk/nvmf.h 00:04:58.243 TEST_HEADER include/spdk/nvmf_spec.h 00:04:58.243 TEST_HEADER include/spdk/nvmf_transport.h 00:04:58.243 TEST_HEADER include/spdk/opal.h 00:04:58.243 TEST_HEADER include/spdk/opal_spec.h 00:04:58.243 TEST_HEADER include/spdk/pci_ids.h 00:04:58.243 TEST_HEADER include/spdk/pipe.h 00:04:58.243 TEST_HEADER include/spdk/queue.h 00:04:58.244 TEST_HEADER include/spdk/reduce.h 00:04:58.244 TEST_HEADER include/spdk/rpc.h 00:04:58.244 TEST_HEADER include/spdk/scheduler.h 00:04:58.244 TEST_HEADER include/spdk/scsi.h 00:04:58.244 TEST_HEADER include/spdk/scsi_spec.h 00:04:58.244 TEST_HEADER include/spdk/sock.h 00:04:58.244 TEST_HEADER include/spdk/string.h 00:04:58.244 TEST_HEADER include/spdk/thread.h 00:04:58.244 TEST_HEADER include/spdk/stdinc.h 00:04:58.244 TEST_HEADER include/spdk/trace.h 00:04:58.244 TEST_HEADER include/spdk/trace_parser.h 00:04:58.244 TEST_HEADER include/spdk/tree.h 00:04:58.244 TEST_HEADER include/spdk/util.h 00:04:58.244 TEST_HEADER include/spdk/ublk.h 00:04:58.244 
TEST_HEADER include/spdk/uuid.h 00:04:58.244 TEST_HEADER include/spdk/version.h 00:04:58.244 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:58.244 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:58.244 TEST_HEADER include/spdk/vhost.h 00:04:58.244 TEST_HEADER include/spdk/vmd.h 00:04:58.244 TEST_HEADER include/spdk/xor.h 00:04:58.244 TEST_HEADER include/spdk/zipf.h 00:04:58.244 CXX test/cpp_headers/accel.o 00:04:58.244 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:58.244 CXX test/cpp_headers/accel_module.o 00:04:58.244 CXX test/cpp_headers/assert.o 00:04:58.244 CXX test/cpp_headers/barrier.o 00:04:58.244 CXX test/cpp_headers/base64.o 00:04:58.244 CXX test/cpp_headers/bdev.o 00:04:58.244 CXX test/cpp_headers/bdev_module.o 00:04:58.244 CXX test/cpp_headers/bdev_zone.o 00:04:58.244 CXX test/cpp_headers/bit_array.o 00:04:58.244 CXX test/cpp_headers/bit_pool.o 00:04:58.244 CXX test/cpp_headers/blob_bdev.o 00:04:58.244 CXX test/cpp_headers/blobfs_bdev.o 00:04:58.244 CXX test/cpp_headers/blobfs.o 00:04:58.244 CXX test/cpp_headers/blob.o 00:04:58.244 CXX test/cpp_headers/conf.o 00:04:58.244 CXX test/cpp_headers/config.o 00:04:58.244 CXX test/cpp_headers/cpuset.o 00:04:58.244 CXX test/cpp_headers/crc16.o 00:04:58.244 CC app/iscsi_tgt/iscsi_tgt.o 00:04:58.244 CC app/spdk_dd/spdk_dd.o 00:04:58.244 CC app/nvmf_tgt/nvmf_main.o 00:04:58.244 CXX test/cpp_headers/crc32.o 00:04:58.244 CC test/env/vtophys/vtophys.o 00:04:58.244 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:58.244 CC examples/ioat/perf/perf.o 00:04:58.244 CC test/app/stub/stub.o 00:04:58.244 CC app/spdk_tgt/spdk_tgt.o 00:04:58.244 CC test/app/jsoncat/jsoncat.o 00:04:58.244 CC test/env/memory/memory_ut.o 00:04:58.244 CC examples/ioat/verify/verify.o 00:04:58.244 CC test/env/pci/pci_ut.o 00:04:58.244 CC test/thread/poller_perf/poller_perf.o 00:04:58.244 CC app/fio/nvme/fio_plugin.o 00:04:58.244 CC test/app/histogram_perf/histogram_perf.o 00:04:58.244 CC examples/util/zipf/zipf.o 00:04:58.509 CC 
test/app/bdev_svc/bdev_svc.o 00:04:58.509 CC app/fio/bdev/fio_plugin.o 00:04:58.509 CC test/dma/test_dma/test_dma.o 00:04:58.509 CC test/env/mem_callbacks/mem_callbacks.o 00:04:58.509 LINK spdk_lspci 00:04:58.509 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:58.509 LINK rpc_client_test 00:04:58.772 LINK spdk_nvme_discover 00:04:58.772 LINK interrupt_tgt 00:04:58.772 LINK histogram_perf 00:04:58.772 LINK poller_perf 00:04:58.772 LINK vtophys 00:04:58.772 LINK env_dpdk_post_init 00:04:58.772 CXX test/cpp_headers/crc64.o 00:04:58.772 LINK zipf 00:04:58.772 LINK jsoncat 00:04:58.772 CXX test/cpp_headers/dif.o 00:04:58.772 CXX test/cpp_headers/dma.o 00:04:58.772 CXX test/cpp_headers/endian.o 00:04:58.773 CXX test/cpp_headers/env_dpdk.o 00:04:58.773 LINK nvmf_tgt 00:04:58.773 CXX test/cpp_headers/env.o 00:04:58.773 CXX test/cpp_headers/event.o 00:04:58.773 CXX test/cpp_headers/fd_group.o 00:04:58.773 CXX test/cpp_headers/fd.o 00:04:58.773 LINK iscsi_tgt 00:04:58.773 CXX test/cpp_headers/file.o 00:04:58.773 CXX test/cpp_headers/fsdev.o 00:04:58.773 LINK spdk_trace_record 00:04:58.773 LINK stub 00:04:58.773 CXX test/cpp_headers/fsdev_module.o 00:04:58.773 CXX test/cpp_headers/ftl.o 00:04:58.773 CXX test/cpp_headers/fuse_dispatcher.o 00:04:58.773 LINK bdev_svc 00:04:58.773 LINK verify 00:04:58.773 LINK ioat_perf 00:04:58.773 CXX test/cpp_headers/gpt_spec.o 00:04:58.773 CXX test/cpp_headers/hexlify.o 00:04:58.773 LINK spdk_tgt 00:04:58.773 CXX test/cpp_headers/histogram_data.o 00:04:58.773 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:58.773 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:58.773 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:59.043 CXX test/cpp_headers/idxd.o 00:04:59.043 CXX test/cpp_headers/idxd_spec.o 00:04:59.043 CXX test/cpp_headers/init.o 00:04:59.043 LINK spdk_dd 00:04:59.043 CXX test/cpp_headers/ioat.o 00:04:59.043 CXX test/cpp_headers/ioat_spec.o 00:04:59.043 CXX test/cpp_headers/iscsi_spec.o 00:04:59.043 CXX test/cpp_headers/json.o 00:04:59.043 
CXX test/cpp_headers/jsonrpc.o 00:04:59.043 LINK spdk_trace 00:04:59.043 CXX test/cpp_headers/keyring.o 00:04:59.043 CXX test/cpp_headers/keyring_module.o 00:04:59.043 CXX test/cpp_headers/likely.o 00:04:59.043 CXX test/cpp_headers/log.o 00:04:59.043 LINK pci_ut 00:04:59.043 CXX test/cpp_headers/lvol.o 00:04:59.043 CXX test/cpp_headers/md5.o 00:04:59.308 CXX test/cpp_headers/memory.o 00:04:59.308 CXX test/cpp_headers/mmio.o 00:04:59.308 CXX test/cpp_headers/nbd.o 00:04:59.308 CXX test/cpp_headers/net.o 00:04:59.308 CXX test/cpp_headers/notify.o 00:04:59.308 CXX test/cpp_headers/nvme.o 00:04:59.308 CXX test/cpp_headers/nvme_intel.o 00:04:59.308 CXX test/cpp_headers/nvme_ocssd.o 00:04:59.308 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:59.308 CXX test/cpp_headers/nvme_spec.o 00:04:59.308 CXX test/cpp_headers/nvme_zns.o 00:04:59.308 CXX test/cpp_headers/nvmf_cmd.o 00:04:59.308 CC test/event/event_perf/event_perf.o 00:04:59.308 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:59.308 CXX test/cpp_headers/nvmf.o 00:04:59.308 CC test/event/reactor/reactor.o 00:04:59.308 LINK nvme_fuzz 00:04:59.308 CC test/event/reactor_perf/reactor_perf.o 00:04:59.576 CC examples/vmd/lsvmd/lsvmd.o 00:04:59.576 CC examples/sock/hello_world/hello_sock.o 00:04:59.576 CC examples/idxd/perf/perf.o 00:04:59.576 CXX test/cpp_headers/nvmf_spec.o 00:04:59.576 CXX test/cpp_headers/nvmf_transport.o 00:04:59.576 CC examples/vmd/led/led.o 00:04:59.576 CXX test/cpp_headers/opal.o 00:04:59.576 LINK test_dma 00:04:59.576 CC test/event/app_repeat/app_repeat.o 00:04:59.576 CC examples/thread/thread/thread_ex.o 00:04:59.576 CXX test/cpp_headers/opal_spec.o 00:04:59.576 CXX test/cpp_headers/pci_ids.o 00:04:59.576 CXX test/cpp_headers/pipe.o 00:04:59.576 CXX test/cpp_headers/queue.o 00:04:59.576 CXX test/cpp_headers/reduce.o 00:04:59.576 CXX test/cpp_headers/rpc.o 00:04:59.576 CXX test/cpp_headers/scheduler.o 00:04:59.576 CXX test/cpp_headers/scsi.o 00:04:59.576 CC test/event/scheduler/scheduler.o 00:04:59.576 CXX 
test/cpp_headers/scsi_spec.o 00:04:59.576 CXX test/cpp_headers/sock.o 00:04:59.576 CXX test/cpp_headers/stdinc.o 00:04:59.576 CXX test/cpp_headers/thread.o 00:04:59.576 CXX test/cpp_headers/string.o 00:04:59.576 CXX test/cpp_headers/trace.o 00:04:59.576 CXX test/cpp_headers/trace_parser.o 00:04:59.576 LINK spdk_bdev 00:04:59.576 CXX test/cpp_headers/tree.o 00:04:59.576 LINK mem_callbacks 00:04:59.842 CXX test/cpp_headers/ublk.o 00:04:59.842 LINK event_perf 00:04:59.842 LINK reactor 00:04:59.842 CXX test/cpp_headers/util.o 00:04:59.842 LINK spdk_nvme 00:04:59.842 LINK spdk_nvme_perf 00:04:59.842 CXX test/cpp_headers/uuid.o 00:04:59.842 LINK vhost_fuzz 00:04:59.842 CXX test/cpp_headers/version.o 00:04:59.842 LINK reactor_perf 00:04:59.842 CXX test/cpp_headers/vfio_user_pci.o 00:04:59.842 CXX test/cpp_headers/vfio_user_spec.o 00:04:59.842 CXX test/cpp_headers/vhost.o 00:04:59.842 LINK lsvmd 00:04:59.842 CXX test/cpp_headers/vmd.o 00:04:59.842 CC app/vhost/vhost.o 00:04:59.842 LINK led 00:04:59.842 CXX test/cpp_headers/xor.o 00:04:59.842 CXX test/cpp_headers/zipf.o 00:04:59.842 LINK spdk_nvme_identify 00:04:59.842 LINK app_repeat 00:04:59.842 LINK spdk_top 00:05:00.101 LINK hello_sock 00:05:00.101 LINK thread 00:05:00.101 LINK scheduler 00:05:00.101 CC test/nvme/reset/reset.o 00:05:00.101 CC test/nvme/aer/aer.o 00:05:00.101 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:00.101 CC test/nvme/overhead/overhead.o 00:05:00.101 CC test/nvme/compliance/nvme_compliance.o 00:05:00.101 CC test/nvme/fused_ordering/fused_ordering.o 00:05:00.101 CC test/nvme/sgl/sgl.o 00:05:00.101 CC test/nvme/connect_stress/connect_stress.o 00:05:00.101 CC test/nvme/startup/startup.o 00:05:00.101 CC test/nvme/cuse/cuse.o 00:05:00.101 CC test/nvme/reserve/reserve.o 00:05:00.101 CC test/nvme/simple_copy/simple_copy.o 00:05:00.101 CC test/nvme/boot_partition/boot_partition.o 00:05:00.101 CC test/nvme/err_injection/err_injection.o 00:05:00.101 LINK vhost 00:05:00.101 CC test/nvme/fdp/fdp.o 
00:05:00.101 LINK idxd_perf 00:05:00.101 CC test/nvme/e2edp/nvme_dp.o 00:05:00.101 CC test/blobfs/mkfs/mkfs.o 00:05:00.360 CC test/accel/dif/dif.o 00:05:00.360 CC test/lvol/esnap/esnap.o 00:05:00.360 LINK boot_partition 00:05:00.360 CC examples/nvme/reconnect/reconnect.o 00:05:00.360 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:00.360 CC examples/nvme/hello_world/hello_world.o 00:05:00.360 CC examples/nvme/abort/abort.o 00:05:00.360 CC examples/nvme/hotplug/hotplug.o 00:05:00.360 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:00.360 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:00.360 CC examples/nvme/arbitration/arbitration.o 00:05:00.360 LINK err_injection 00:05:00.360 LINK fused_ordering 00:05:00.621 LINK doorbell_aers 00:05:00.621 LINK mkfs 00:05:00.621 LINK connect_stress 00:05:00.621 LINK startup 00:05:00.621 LINK reset 00:05:00.621 LINK sgl 00:05:00.621 LINK aer 00:05:00.621 CC examples/accel/perf/accel_perf.o 00:05:00.621 LINK overhead 00:05:00.621 LINK memory_ut 00:05:00.621 CC examples/blob/cli/blobcli.o 00:05:00.621 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:00.621 LINK reserve 00:05:00.621 LINK simple_copy 00:05:00.621 CC examples/blob/hello_world/hello_blob.o 00:05:00.621 LINK nvme_compliance 00:05:00.621 LINK nvme_dp 00:05:00.621 LINK fdp 00:05:00.621 LINK cmb_copy 00:05:00.881 LINK pmr_persistence 00:05:00.881 LINK hello_world 00:05:00.881 LINK reconnect 00:05:00.881 LINK hotplug 00:05:00.881 LINK hello_blob 00:05:00.881 LINK abort 00:05:01.140 LINK dif 00:05:01.140 LINK arbitration 00:05:01.140 LINK hello_fsdev 00:05:01.140 LINK nvme_manage 00:05:01.140 LINK accel_perf 00:05:01.140 LINK blobcli 00:05:01.399 LINK iscsi_fuzz 00:05:01.399 CC test/bdev/bdevio/bdevio.o 00:05:01.399 CC examples/bdev/hello_world/hello_bdev.o 00:05:01.399 CC examples/bdev/bdevperf/bdevperf.o 00:05:01.657 LINK cuse 00:05:01.657 LINK hello_bdev 00:05:01.915 LINK bdevio 00:05:02.480 LINK bdevperf 00:05:02.736 CC examples/nvmf/nvmf/nvmf.o 00:05:02.993 LINK 
nvmf 00:05:05.573 LINK esnap 00:05:05.833 00:05:05.833 real 1m10.518s 00:05:05.833 user 11m56.491s 00:05:05.833 sys 2m40.076s 00:05:05.833 09:25:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:05.833 09:25:54 make -- common/autotest_common.sh@10 -- $ set +x 00:05:05.833 ************************************ 00:05:05.833 END TEST make 00:05:05.833 ************************************ 00:05:05.833 09:25:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:05.833 09:25:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:05.833 09:25:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:05.833 09:25:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.833 09:25:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:05.833 09:25:54 -- pm/common@44 -- $ pid=27640 00:05:05.833 09:25:54 -- pm/common@50 -- $ kill -TERM 27640 00:05:05.833 09:25:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.833 09:25:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:05.833 09:25:54 -- pm/common@44 -- $ pid=27642 00:05:05.833 09:25:54 -- pm/common@50 -- $ kill -TERM 27642 00:05:05.833 09:25:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.833 09:25:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:05.833 09:25:54 -- pm/common@44 -- $ pid=27644 00:05:05.833 09:25:54 -- pm/common@50 -- $ kill -TERM 27644 00:05:05.833 09:25:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.833 09:25:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:05.833 09:25:54 -- pm/common@44 -- $ pid=27674 00:05:05.833 09:25:54 -- pm/common@50 -- $ sudo -E kill -TERM 27674 00:05:05.833 09:25:54 
-- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:05.833 09:25:54 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:05.833 09:25:54 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:05.833 09:25:54 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:05.833 09:25:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.833 09:25:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.833 09:25:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.833 09:25:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.833 09:25:54 -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.833 09:25:54 -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.833 09:25:54 -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.833 09:25:54 -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.833 09:25:54 -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.833 09:25:54 -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.833 09:25:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.833 09:25:54 -- scripts/common.sh@344 -- # case "$op" in 00:05:05.833 09:25:54 -- scripts/common.sh@345 -- # : 1 00:05:05.833 09:25:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.833 09:25:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.833 09:25:54 -- scripts/common.sh@365 -- # decimal 1 00:05:05.833 09:25:54 -- scripts/common.sh@353 -- # local d=1 00:05:05.833 09:25:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.833 09:25:54 -- scripts/common.sh@355 -- # echo 1 00:05:05.833 09:25:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.833 09:25:54 -- scripts/common.sh@366 -- # decimal 2 00:05:05.833 09:25:54 -- scripts/common.sh@353 -- # local d=2 00:05:05.833 09:25:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.833 09:25:54 -- scripts/common.sh@355 -- # echo 2 00:05:05.833 09:25:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.833 09:25:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.833 09:25:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.833 09:25:54 -- scripts/common.sh@368 -- # return 0 00:05:05.833 09:25:54 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.833 09:25:54 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.833 --rc genhtml_branch_coverage=1 00:05:05.833 --rc genhtml_function_coverage=1 00:05:05.833 --rc genhtml_legend=1 00:05:05.833 --rc geninfo_all_blocks=1 00:05:05.833 --rc geninfo_unexecuted_blocks=1 00:05:05.833 00:05:05.833 ' 00:05:05.833 09:25:54 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.833 --rc genhtml_branch_coverage=1 00:05:05.833 --rc genhtml_function_coverage=1 00:05:05.833 --rc genhtml_legend=1 00:05:05.833 --rc geninfo_all_blocks=1 00:05:05.833 --rc geninfo_unexecuted_blocks=1 00:05:05.833 00:05:05.833 ' 00:05:05.833 09:25:54 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.833 --rc genhtml_branch_coverage=1 00:05:05.833 --rc 
genhtml_function_coverage=1 00:05:05.833 --rc genhtml_legend=1 00:05:05.833 --rc geninfo_all_blocks=1 00:05:05.833 --rc geninfo_unexecuted_blocks=1 00:05:05.833 00:05:05.833 ' 00:05:05.833 09:25:54 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.833 --rc genhtml_branch_coverage=1 00:05:05.833 --rc genhtml_function_coverage=1 00:05:05.833 --rc genhtml_legend=1 00:05:05.833 --rc geninfo_all_blocks=1 00:05:05.833 --rc geninfo_unexecuted_blocks=1 00:05:05.833 00:05:05.833 ' 00:05:05.833 09:25:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.833 09:25:54 -- nvmf/common.sh@7 -- # uname -s 00:05:05.833 09:25:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.833 09:25:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.833 09:25:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.833 09:25:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.833 09:25:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.833 09:25:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.833 09:25:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.833 09:25:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.833 09:25:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.833 09:25:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.095 09:25:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:05:06.095 09:25:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:05:06.095 09:25:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.095 09:25:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.095 09:25:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:06.095 09:25:54 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.095 09:25:54 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.095 09:25:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.095 09:25:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.095 09:25:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.095 09:25:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.095 09:25:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.095 09:25:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.095 09:25:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.095 09:25:54 -- paths/export.sh@5 -- # export PATH 00:05:06.095 09:25:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.095 09:25:54 -- nvmf/common.sh@51 -- # : 0 00:05:06.095 09:25:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.095 09:25:54 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:06.095 09:25:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.095 09:25:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.095 09:25:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.095 09:25:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.095 09:25:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.095 09:25:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.095 09:25:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.095 09:25:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:06.095 09:25:54 -- spdk/autotest.sh@32 -- # uname -s 00:05:06.095 09:25:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:06.095 09:25:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:06.095 09:25:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:06.095 09:25:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:06.095 09:25:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:06.095 09:25:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:06.095 09:25:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:06.095 09:25:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:06.095 09:25:54 -- spdk/autotest.sh@48 -- # udevadm_pid=87336 00:05:06.095 09:25:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:06.095 09:25:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:06.095 09:25:54 -- pm/common@17 -- # local monitor 00:05:06.095 09:25:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.095 09:25:54 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:06.095 09:25:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.095 09:25:54 -- pm/common@21 -- # date +%s 00:05:06.095 09:25:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.095 09:25:54 -- pm/common@21 -- # date +%s 00:05:06.095 09:25:54 -- pm/common@25 -- # sleep 1 00:05:06.095 09:25:54 -- pm/common@21 -- # date +%s 00:05:06.095 09:25:54 -- pm/common@21 -- # date +%s 00:05:06.095 09:25:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285954 00:05:06.095 09:25:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285954 00:05:06.095 09:25:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285954 00:05:06.095 09:25:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285954 00:05:06.095 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285954_collect-vmstat.pm.log 00:05:06.095 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285954_collect-cpu-load.pm.log 00:05:06.095 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285954_collect-cpu-temp.pm.log 00:05:06.095 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285954_collect-bmc-pm.bmc.pm.log 00:05:07.037 
09:25:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:07.037 09:25:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:07.037 09:25:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:07.037 09:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:07.037 09:25:55 -- spdk/autotest.sh@59 -- # create_test_list 00:05:07.037 09:25:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:07.037 09:25:55 -- common/autotest_common.sh@10 -- # set +x 00:05:07.037 09:25:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:07.037 09:25:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.037 09:25:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.037 09:25:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:07.037 09:25:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.037 09:25:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:07.037 09:25:55 -- common/autotest_common.sh@1455 -- # uname 00:05:07.037 09:25:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:07.037 09:25:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:07.037 09:25:55 -- common/autotest_common.sh@1475 -- # uname 00:05:07.037 09:25:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:07.037 09:25:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:07.037 09:25:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:07.037 lcov: LCOV version 1.15 00:05:07.037 09:25:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:25.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:25.128 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:47.054 09:26:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:47.054 09:26:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.054 09:26:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.054 09:26:32 -- spdk/autotest.sh@78 -- # rm -f 00:05:47.054 09:26:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.054 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:05:47.054 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:47.054 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:47.054 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:47.054 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:47.054 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:47.054 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:47.054 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:47.054 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:47.054 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:47.054 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:47.054 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:47.054 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:47.054 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:47.054 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:47.054 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:47.054 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:47.054 09:26:34 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:47.054 09:26:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:47.054 09:26:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:47.054 09:26:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:47.054 09:26:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:47.054 09:26:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:47.054 09:26:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:47.054 09:26:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:47.054 09:26:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:47.054 09:26:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:47.054 09:26:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:47.054 09:26:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:47.054 09:26:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:47.054 09:26:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:47.054 09:26:34 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:47.054 No valid GPT data, bailing 00:05:47.054 09:26:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:47.054 09:26:34 -- scripts/common.sh@394 -- # pt= 00:05:47.054 09:26:34 -- scripts/common.sh@395 -- # return 1 00:05:47.054 09:26:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:47.054 1+0 records in 00:05:47.054 1+0 records out 00:05:47.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00164595 s, 637 MB/s 00:05:47.054 09:26:34 -- spdk/autotest.sh@105 -- # sync 00:05:47.054 09:26:34 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:47.054 09:26:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:47.054 09:26:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:47.054 09:26:35 -- spdk/autotest.sh@111 -- # uname -s 00:05:47.054 09:26:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:47.054 09:26:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:47.054 09:26:35 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:48.433 Hugepages 00:05:48.433 node hugesize free / total 00:05:48.433 node0 1048576kB 0 / 0 00:05:48.433 node0 2048kB 0 / 0 00:05:48.433 node1 1048576kB 0 / 0 00:05:48.433 node1 2048kB 0 / 0 00:05:48.433 00:05:48.433 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:48.433 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:48.433 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:48.433 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:48.433 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:48.433 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:48.433 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:48.433 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:48.433 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:48.433 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:48.433 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:48.433 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:48.433 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:48.433 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:48.433 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:48.433 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:48.433 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:48.433 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:48.433 09:26:37 -- spdk/autotest.sh@117 -- # uname -s 00:05:48.433 09:26:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:48.433 09:26:37 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:48.433 09:26:37 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:49.374 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:49.374 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:49.374 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:49.374 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:49.374 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:49.374 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:49.633 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:49.633 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:49.633 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:49.633 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:49.633 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:49.633 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:49.633 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:49.633 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:49.633 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:49.633 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:50.577 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:05:50.577 09:26:39 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:51.520 09:26:40 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:51.520 09:26:40 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:51.520 09:26:40 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:51.520 09:26:40 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:51.520 09:26:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:51.520 09:26:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:51.520 09:26:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.521 09:26:40 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.521 09:26:40 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:05:51.779 09:26:40 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:51.779 09:26:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:84:00.0 00:05:51.779 09:26:40 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:52.721 Waiting for block devices as requested 00:05:52.721 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:05:52.981 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:52.981 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:53.241 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:53.241 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:53.241 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:53.241 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:53.502 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:53.502 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:53.502 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:53.502 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:53.761 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:53.761 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:53.761 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:54.021 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:54.021 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:54.021 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:54.281 09:26:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:54.281 09:26:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:05:54.281 09:26:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:54.281 09:26:43 -- common/autotest_common.sh@1485 -- # grep 0000:84:00.0/nvme/nvme 00:05:54.281 09:26:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:05:54.281 09:26:43 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:05:54.281 09:26:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:05:54.281 09:26:43 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:54.281 09:26:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:54.281 09:26:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:54.281 09:26:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:54.281 09:26:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:54.281 09:26:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:54.281 09:26:43 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:05:54.281 09:26:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:54.281 09:26:43 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:54.281 09:26:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:54.281 09:26:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:54.281 09:26:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:54.281 09:26:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:54.281 09:26:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:54.281 09:26:43 -- common/autotest_common.sh@1541 -- # continue 00:05:54.281 09:26:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:54.281 09:26:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.281 09:26:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.281 09:26:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:54.281 09:26:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.281 09:26:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.281 09:26:43 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:55.660 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.660 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:05:55.660 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.660 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.660 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.660 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.660 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.660 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:55.660 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.660 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:55.660 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.660 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.660 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.660 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.660 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.660 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:56.601 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:05:56.601 09:26:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:56.601 09:26:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:56.601 09:26:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.601 09:26:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:56.601 09:26:45 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:56.601 09:26:45 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:56.601 09:26:45 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:56.601 09:26:45 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:56.601 09:26:45 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:56.601 09:26:45 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:56.601 09:26:45 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:56.601 09:26:45 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:56.601 09:26:45 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:56.601 09:26:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:56.601 09:26:45 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:56.601 09:26:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:56.601 09:26:45 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:56.601 09:26:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:84:00.0 00:05:56.601 09:26:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:56.601 09:26:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:05:56.601 09:26:45 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:56.601 09:26:45 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:56.601 09:26:45 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:56.601 09:26:45 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:56.601 09:26:45 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:84:00.0 00:05:56.601 09:26:45 -- common/autotest_common.sh@1577 -- # [[ -z 0000:84:00.0 ]] 00:05:56.601 09:26:45 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=97227 00:05:56.601 09:26:45 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.601 09:26:45 -- common/autotest_common.sh@1583 -- # waitforlisten 97227 00:05:56.601 09:26:45 -- common/autotest_common.sh@831 -- # '[' -z 97227 ']' 00:05:56.601 09:26:45 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.601 09:26:45 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.601 09:26:45 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.601 09:26:45 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.601 09:26:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.862 [2024-10-07 09:26:45.615508] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:05:56.862 [2024-10-07 09:26:45.615595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97227 ] 00:05:56.862 [2024-10-07 09:26:45.670958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.862 [2024-10-07 09:26:45.774039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.122 09:26:46 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.122 09:26:46 -- common/autotest_common.sh@864 -- # return 0 00:05:57.122 09:26:46 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:57.122 09:26:46 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:57.122 09:26:46 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:06:00.416 nvme0n1 00:06:00.416 09:26:49 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:00.416 [2024-10-07 09:26:49.368812] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:00.416 [2024-10-07 09:26:49.368854] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:00.416 request: 00:06:00.416 { 00:06:00.416 "nvme_ctrlr_name": "nvme0", 00:06:00.416 "password": "test", 00:06:00.416 "method": "bdev_nvme_opal_revert", 00:06:00.416 "req_id": 1 00:06:00.416 } 00:06:00.416 Got JSON-RPC error response 00:06:00.416 response: 00:06:00.416 { 00:06:00.416 
"code": -32603, 00:06:00.416 "message": "Internal error" 00:06:00.416 } 00:06:00.416 09:26:49 -- common/autotest_common.sh@1589 -- # true 00:06:00.416 09:26:49 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:06:00.416 09:26:49 -- common/autotest_common.sh@1593 -- # killprocess 97227 00:06:00.416 09:26:49 -- common/autotest_common.sh@950 -- # '[' -z 97227 ']' 00:06:00.416 09:26:49 -- common/autotest_common.sh@954 -- # kill -0 97227 00:06:00.416 09:26:49 -- common/autotest_common.sh@955 -- # uname 00:06:00.416 09:26:49 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.416 09:26:49 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97227 00:06:00.675 09:26:49 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.675 09:26:49 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.675 09:26:49 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97227' 00:06:00.675 killing process with pid 97227 00:06:00.675 09:26:49 -- common/autotest_common.sh@969 -- # kill 97227 00:06:00.675 09:26:49 -- common/autotest_common.sh@974 -- # wait 97227 00:06:02.589 09:26:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:02.589 09:26:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:02.589 09:26:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:02.589 09:26:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:02.589 09:26:51 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:02.589 09:26:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:02.589 09:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:02.589 09:26:51 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:02.589 09:26:51 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:02.589 09:26:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.589 09:26:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.589 09:26:51 -- 
common/autotest_common.sh@10 -- # set +x 00:06:02.589 ************************************ 00:06:02.589 START TEST env 00:06:02.589 ************************************ 00:06:02.589 09:26:51 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:02.589 * Looking for test storage... 00:06:02.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:02.589 09:26:51 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.589 09:26:51 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.589 09:26:51 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.589 09:26:51 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.589 09:26:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.589 09:26:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.589 09:26:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.589 09:26:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.589 09:26:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.589 09:26:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.589 09:26:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.589 09:26:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.589 09:26:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.589 09:26:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.589 09:26:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.589 09:26:51 env -- scripts/common.sh@344 -- # case "$op" in 00:06:02.589 09:26:51 env -- scripts/common.sh@345 -- # : 1 00:06:02.589 09:26:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.589 09:26:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.589 09:26:51 env -- scripts/common.sh@365 -- # decimal 1 00:06:02.589 09:26:51 env -- scripts/common.sh@353 -- # local d=1 00:06:02.589 09:26:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.589 09:26:51 env -- scripts/common.sh@355 -- # echo 1 00:06:02.590 09:26:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.590 09:26:51 env -- scripts/common.sh@366 -- # decimal 2 00:06:02.590 09:26:51 env -- scripts/common.sh@353 -- # local d=2 00:06:02.590 09:26:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.590 09:26:51 env -- scripts/common.sh@355 -- # echo 2 00:06:02.590 09:26:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.590 09:26:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.590 09:26:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.590 09:26:51 env -- scripts/common.sh@368 -- # return 0 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.590 --rc genhtml_branch_coverage=1 00:06:02.590 --rc genhtml_function_coverage=1 00:06:02.590 --rc genhtml_legend=1 00:06:02.590 --rc geninfo_all_blocks=1 00:06:02.590 --rc geninfo_unexecuted_blocks=1 00:06:02.590 00:06:02.590 ' 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.590 --rc genhtml_branch_coverage=1 00:06:02.590 --rc genhtml_function_coverage=1 00:06:02.590 --rc genhtml_legend=1 00:06:02.590 --rc geninfo_all_blocks=1 00:06:02.590 --rc geninfo_unexecuted_blocks=1 00:06:02.590 00:06:02.590 ' 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:02.590 --rc genhtml_branch_coverage=1 00:06:02.590 --rc genhtml_function_coverage=1 00:06:02.590 --rc genhtml_legend=1 00:06:02.590 --rc geninfo_all_blocks=1 00:06:02.590 --rc geninfo_unexecuted_blocks=1 00:06:02.590 00:06:02.590 ' 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.590 --rc genhtml_branch_coverage=1 00:06:02.590 --rc genhtml_function_coverage=1 00:06:02.590 --rc genhtml_legend=1 00:06:02.590 --rc geninfo_all_blocks=1 00:06:02.590 --rc geninfo_unexecuted_blocks=1 00:06:02.590 00:06:02.590 ' 00:06:02.590 09:26:51 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.590 09:26:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.590 ************************************ 00:06:02.590 START TEST env_memory 00:06:02.590 ************************************ 00:06:02.590 09:26:51 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:02.590 00:06:02.590 00:06:02.590 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.590 http://cunit.sourceforge.net/ 00:06:02.590 00:06:02.590 00:06:02.590 Suite: memory 00:06:02.590 Test: alloc and free memory map ...[2024-10-07 09:26:51.405672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:02.590 passed 00:06:02.590 Test: mem map translation ...[2024-10-07 09:26:51.425513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:02.590 [2024-10-07 
09:26:51.425534] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:02.590 [2024-10-07 09:26:51.425580] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:02.590 [2024-10-07 09:26:51.425591] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:02.590 passed 00:06:02.590 Test: mem map registration ...[2024-10-07 09:26:51.466922] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:02.590 [2024-10-07 09:26:51.466941] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:02.590 passed 00:06:02.590 Test: mem map adjacent registrations ...passed 00:06:02.590 00:06:02.590 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.590 suites 1 1 n/a 0 0 00:06:02.590 tests 4 4 4 0 0 00:06:02.590 asserts 152 152 152 0 n/a 00:06:02.590 00:06:02.590 Elapsed time = 0.140 seconds 00:06:02.590 00:06:02.590 real 0m0.149s 00:06:02.590 user 0m0.137s 00:06:02.590 sys 0m0.011s 00:06:02.590 09:26:51 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.590 09:26:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:02.590 ************************************ 00:06:02.590 END TEST env_memory 00:06:02.590 ************************************ 00:06:02.590 09:26:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:06:02.590 09:26:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.590 09:26:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.590 ************************************ 00:06:02.590 START TEST env_vtophys 00:06:02.590 ************************************ 00:06:02.590 09:26:51 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:02.590 EAL: lib.eal log level changed from notice to debug 00:06:02.590 EAL: Detected lcore 0 as core 0 on socket 0 00:06:02.590 EAL: Detected lcore 1 as core 1 on socket 0 00:06:02.590 EAL: Detected lcore 2 as core 2 on socket 0 00:06:02.590 EAL: Detected lcore 3 as core 3 on socket 0 00:06:02.590 EAL: Detected lcore 4 as core 4 on socket 0 00:06:02.590 EAL: Detected lcore 5 as core 5 on socket 0 00:06:02.590 EAL: Detected lcore 6 as core 8 on socket 0 00:06:02.590 EAL: Detected lcore 7 as core 9 on socket 0 00:06:02.590 EAL: Detected lcore 8 as core 10 on socket 0 00:06:02.590 EAL: Detected lcore 9 as core 11 on socket 0 00:06:02.590 EAL: Detected lcore 10 as core 12 on socket 0 00:06:02.590 EAL: Detected lcore 11 as core 13 on socket 0 00:06:02.590 EAL: Detected lcore 12 as core 0 on socket 1 00:06:02.590 EAL: Detected lcore 13 as core 1 on socket 1 00:06:02.590 EAL: Detected lcore 14 as core 2 on socket 1 00:06:02.590 EAL: Detected lcore 15 as core 3 on socket 1 00:06:02.590 EAL: Detected lcore 16 as core 4 on socket 1 00:06:02.590 EAL: Detected lcore 17 as core 5 on socket 1 00:06:02.590 EAL: Detected lcore 18 as core 8 on socket 1 00:06:02.590 EAL: Detected lcore 19 as core 9 on socket 1 00:06:02.590 EAL: Detected lcore 20 as core 10 on socket 1 00:06:02.590 EAL: Detected lcore 21 as core 11 on socket 1 00:06:02.590 EAL: Detected lcore 22 as core 12 on socket 1 00:06:02.590 EAL: Detected lcore 23 as core 13 on socket 1 00:06:02.590 EAL: Detected lcore 24 as core 0 on socket 0 00:06:02.590 EAL: Detected lcore 25 as core 
1 on socket 0 00:06:02.590 EAL: Detected lcore 26 as core 2 on socket 0 00:06:02.590 EAL: Detected lcore 27 as core 3 on socket 0 00:06:02.590 EAL: Detected lcore 28 as core 4 on socket 0 00:06:02.590 EAL: Detected lcore 29 as core 5 on socket 0 00:06:02.590 EAL: Detected lcore 30 as core 8 on socket 0 00:06:02.590 EAL: Detected lcore 31 as core 9 on socket 0 00:06:02.590 EAL: Detected lcore 32 as core 10 on socket 0 00:06:02.590 EAL: Detected lcore 33 as core 11 on socket 0 00:06:02.590 EAL: Detected lcore 34 as core 12 on socket 0 00:06:02.590 EAL: Detected lcore 35 as core 13 on socket 0 00:06:02.590 EAL: Detected lcore 36 as core 0 on socket 1 00:06:02.590 EAL: Detected lcore 37 as core 1 on socket 1 00:06:02.590 EAL: Detected lcore 38 as core 2 on socket 1 00:06:02.590 EAL: Detected lcore 39 as core 3 on socket 1 00:06:02.590 EAL: Detected lcore 40 as core 4 on socket 1 00:06:02.590 EAL: Detected lcore 41 as core 5 on socket 1 00:06:02.590 EAL: Detected lcore 42 as core 8 on socket 1 00:06:02.590 EAL: Detected lcore 43 as core 9 on socket 1 00:06:02.590 EAL: Detected lcore 44 as core 10 on socket 1 00:06:02.590 EAL: Detected lcore 45 as core 11 on socket 1 00:06:02.590 EAL: Detected lcore 46 as core 12 on socket 1 00:06:02.590 EAL: Detected lcore 47 as core 13 on socket 1 00:06:02.850 EAL: Maximum logical cores by configuration: 128 00:06:02.850 EAL: Detected CPU lcores: 48 00:06:02.850 EAL: Detected NUMA nodes: 2 00:06:02.850 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:02.850 EAL: Detected shared linkage of DPDK 00:06:02.850 EAL: No shared files mode enabled, IPC will be disabled 00:06:02.850 EAL: Bus pci wants IOVA as 'DC' 00:06:02.850 EAL: Buses did not request a specific IOVA mode. 00:06:02.850 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:02.850 EAL: Selected IOVA mode 'VA' 00:06:02.850 EAL: Probing VFIO support... 
00:06:02.850 EAL: IOMMU type 1 (Type 1) is supported 00:06:02.850 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:02.850 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:02.850 EAL: VFIO support initialized 00:06:02.851 EAL: Ask a virtual area of 0x2e000 bytes 00:06:02.851 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:02.851 EAL: Setting up physically contiguous memory... 00:06:02.851 EAL: Setting maximum number of open files to 524288 00:06:02.851 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:02.851 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:02.851 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:02.851 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.851 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:02.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.851 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.851 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:02.851 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:02.851 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.851 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:02.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.851 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.851 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:02.851 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:02.851 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.851 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:02.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.851 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.851 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:02.851 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:02.851 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.851 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:02.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.851 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.851 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:02.851 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:02.851 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:02.851 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.851 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:02.851 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.851 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.851 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:02.851 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:02.851 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.851 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:02.851 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.851 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.851 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:02.851 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:02.851 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.851 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:02.851 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.851 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.851 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:02.851 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:02.851 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.851 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:02.851 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.851 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.851 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:06:02.851 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:06:02.851 EAL: Hugepages will be freed exactly as allocated.
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: TSC frequency is ~2700000 KHz
00:06:02.851 EAL: Main lcore 0 is ready (tid=7f57db68aa00;cpuset=[0])
00:06:02.851 EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 0
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 2MB
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:02.851 EAL: Mem event callback 'spdk:(nil)' registered
00:06:02.851
00:06:02.851
00:06:02.851 CUnit - A unit testing framework for C - Version 2.1-3
00:06:02.851 http://cunit.sourceforge.net/
00:06:02.851
00:06:02.851
00:06:02.851 Suite: components_suite
00:06:02.851 Test: vtophys_malloc_test ...passed
00:06:02.851 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 4
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 4MB
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was shrunk by 4MB
00:06:02.851 EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 4
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 6MB
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was shrunk by 6MB
00:06:02.851 EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 4
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 10MB
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was shrunk by 10MB
00:06:02.851 EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 4
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 18MB
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was shrunk by 18MB
00:06:02.851 EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 4
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 34MB
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was shrunk by 34MB
00:06:02.851 EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 4
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 66MB
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was shrunk by 66MB
00:06:02.851 EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 4
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 130MB
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was shrunk by 130MB
00:06:02.851 EAL: Trying to obtain current memory policy.
00:06:02.851 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.851 EAL: Restoring previous memory policy: 4
00:06:02.851 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.851 EAL: request: mp_malloc_sync
00:06:02.851 EAL: No shared files mode enabled, IPC is disabled
00:06:02.851 EAL: Heap on socket 0 was expanded by 258MB
00:06:03.112 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.112 EAL: request: mp_malloc_sync
00:06:03.112 EAL: No shared files mode enabled, IPC is disabled
00:06:03.112 EAL: Heap on socket 0 was shrunk by 258MB
00:06:03.112 EAL: Trying to obtain current memory policy.
00:06:03.112 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.112 EAL: Restoring previous memory policy: 4
00:06:03.112 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.112 EAL: request: mp_malloc_sync
00:06:03.112 EAL: No shared files mode enabled, IPC is disabled
00:06:03.112 EAL: Heap on socket 0 was expanded by 514MB
00:06:03.372 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.372 EAL: request: mp_malloc_sync
00:06:03.372 EAL: No shared files mode enabled, IPC is disabled
00:06:03.372 EAL: Heap on socket 0 was shrunk by 514MB
00:06:03.372 EAL: Trying to obtain current memory policy.
00:06:03.372 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.631 EAL: Restoring previous memory policy: 4
00:06:03.631 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.631 EAL: request: mp_malloc_sync
00:06:03.631 EAL: No shared files mode enabled, IPC is disabled
00:06:03.631 EAL: Heap on socket 0 was expanded by 1026MB
00:06:03.892 EAL: Calling mem event callback 'spdk:(nil)'
00:06:04.152 EAL: request: mp_malloc_sync
00:06:04.152 EAL: No shared files mode enabled, IPC is disabled
00:06:04.152 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:04.152 passed
00:06:04.152
00:06:04.152 Run Summary: Type Total Ran Passed Failed Inactive
00:06:04.152 suites 1 1 n/a 0 0
00:06:04.152 tests 2 2 2 0 0
00:06:04.152 asserts 497 497 497 0 n/a
00:06:04.152
00:06:04.152 Elapsed time = 1.308 seconds
00:06:04.152 EAL: Calling mem event callback 'spdk:(nil)'
00:06:04.152 EAL: request: mp_malloc_sync
00:06:04.152 EAL: No shared files mode enabled, IPC is disabled
00:06:04.152 EAL: Heap on socket 0 was shrunk by 2MB
00:06:04.152 EAL: No shared files mode enabled, IPC is disabled
00:06:04.152 EAL: No shared files mode enabled, IPC is disabled
00:06:04.152 EAL: No shared files mode enabled, IPC is disabled
00:06:04.152
00:06:04.152 real 0m1.413s
00:06:04.152 user 0m0.838s
00:06:04.152 sys 0m0.544s
00:06:04.152 09:26:52 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:04.152 09:26:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:04.152 ************************************
00:06:04.152 END TEST env_vtophys
00:06:04.152 ************************************
00:06:04.152 09:26:53 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:04.152 09:26:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:04.152 09:26:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:04.152 09:26:53 env -- common/autotest_common.sh@10 -- # set +x
00:06:04.152 ************************************
00:06:04.152 START TEST env_pci
00:06:04.152 ************************************
00:06:04.152 09:26:53 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:04.152
00:06:04.152
00:06:04.152 CUnit - A unit testing framework for C - Version 2.1-3
00:06:04.152 http://cunit.sourceforge.net/
00:06:04.152
00:06:04.152
00:06:04.152 Suite: pci
00:06:04.152 Test: pci_hook ...[2024-10-07 09:26:53.046265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 98095 has claimed it
00:06:04.152 EAL: Cannot find device (10000:00:01.0)
00:06:04.152 EAL: Failed to attach device on primary process
00:06:04.152 passed
00:06:04.152
00:06:04.152 Run Summary: Type Total Ran Passed Failed Inactive
00:06:04.152 suites 1 1 n/a 0 0
00:06:04.152 tests 1 1 1 0 0
00:06:04.152 asserts 25 25 25 0 n/a
00:06:04.152
00:06:04.152 Elapsed time = 0.020 seconds
00:06:04.152
00:06:04.152 real 0m0.032s
00:06:04.152 user 0m0.010s
00:06:04.152 sys 0m0.021s
00:06:04.152 09:26:53 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:04.152 09:26:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:04.152 ************************************
00:06:04.152 END TEST env_pci
00:06:04.152 ************************************
00:06:04.152 09:26:53 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:04.152 09:26:53 env -- env/env.sh@15 -- # uname
00:06:04.152 09:26:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:04.152 09:26:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:04.152 09:26:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:04.152 09:26:53 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:06:04.152 09:26:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:04.152 09:26:53 env -- common/autotest_common.sh@10 -- # set +x
00:06:04.152 ************************************
00:06:04.152 START TEST env_dpdk_post_init
00:06:04.152 ************************************
00:06:04.152 09:26:53 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:04.152 EAL: Detected CPU lcores: 48
00:06:04.152 EAL: Detected NUMA nodes: 2
00:06:04.152 EAL: Detected shared linkage of DPDK
00:06:04.152 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:04.413 EAL: Selected IOVA mode 'VA'
00:06:04.413 EAL: VFIO support initialized
00:06:04.413 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:04.413 EAL: Using IOMMU type 1 (Type 1)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:06:04.413 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:06:04.674 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:06:05.246 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1)
00:06:08.553 EAL: Releasing PCI mapped resource for 0000:84:00.0
00:06:08.553 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000
00:06:08.553 Starting DPDK initialization...
00:06:08.553 Starting SPDK post initialization...
00:06:08.553 SPDK NVMe probe
00:06:08.553 Attaching to 0000:84:00.0
00:06:08.553 Attached to 0000:84:00.0
00:06:08.553 Cleaning up...
00:06:08.553
00:06:08.553 real 0m4.383s
00:06:08.553 user 0m3.009s
00:06:08.553 sys 0m0.428s
00:06:08.553 09:26:57 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:08.553 09:26:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:08.553 ************************************
00:06:08.553 END TEST env_dpdk_post_init
00:06:08.553 ************************************
00:06:08.553 09:26:57 env -- env/env.sh@26 -- # uname
00:06:08.553 09:26:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:08.553 09:26:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:08.553 09:26:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:08.553 09:26:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:08.553 09:26:57 env -- common/autotest_common.sh@10 -- # set +x
00:06:08.812 ************************************
00:06:08.812 START TEST env_mem_callbacks
00:06:08.812 ************************************
00:06:08.812 09:26:57 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:08.812 EAL: Detected CPU lcores: 48
00:06:08.812 EAL: Detected NUMA nodes: 2
00:06:08.812 EAL: Detected shared linkage of DPDK
00:06:08.812 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:08.812 EAL: Selected IOVA mode 'VA'
00:06:08.812 EAL: VFIO support initialized
00:06:08.812 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:08.812
00:06:08.812
00:06:08.812 CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.812 http://cunit.sourceforge.net/
00:06:08.812
00:06:08.812
00:06:08.812 Suite: memory
00:06:08.812 Test: test ...
00:06:08.812 register 0x200000200000 2097152
00:06:08.812 malloc 3145728
00:06:08.812 register 0x200000400000 4194304
00:06:08.812 buf 0x200000500000 len 3145728 PASSED
00:06:08.812 malloc 64
00:06:08.812 buf 0x2000004fff40 len 64 PASSED
00:06:08.812 malloc 4194304
00:06:08.812 register 0x200000800000 6291456
00:06:08.812 buf 0x200000a00000 len 4194304 PASSED
00:06:08.812 free 0x200000500000 3145728
00:06:08.812 free 0x2000004fff40 64
00:06:08.812 unregister 0x200000400000 4194304 PASSED
00:06:08.812 free 0x200000a00000 4194304
00:06:08.812 unregister 0x200000800000 6291456 PASSED
00:06:08.812 malloc 8388608
00:06:08.812 register 0x200000400000 10485760
00:06:08.812 buf 0x200000600000 len 8388608 PASSED
00:06:08.812 free 0x200000600000 8388608
00:06:08.812 unregister 0x200000400000 10485760 PASSED
00:06:08.812 passed
00:06:08.812
00:06:08.812 Run Summary: Type Total Ran Passed Failed Inactive
00:06:08.812 suites 1 1 n/a 0 0
00:06:08.812 tests 1 1 1 0 0
00:06:08.812 asserts 15 15 15 0 n/a
00:06:08.812
00:06:08.812 Elapsed time = 0.005 seconds
00:06:08.812
00:06:08.812 real 0m0.048s
00:06:08.812 user 0m0.013s
00:06:08.812 sys 0m0.035s
00:06:08.812 09:26:57 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:08.812 09:26:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:08.812 ************************************
00:06:08.812 END TEST env_mem_callbacks
00:06:08.812 ************************************
00:06:08.812
00:06:08.812 real 0m6.420s
00:06:08.812 user 0m4.191s
00:06:08.812 sys 0m1.274s
00:06:08.812 09:26:57 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:08.812 09:26:57 env -- common/autotest_common.sh@10 -- # set +x
00:06:08.812 ************************************
00:06:08.812 END TEST env
00:06:08.812 ************************************
00:06:08.812 09:26:57 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:08.812 09:26:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:08.812 09:26:57 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:08.812 09:26:57 -- common/autotest_common.sh@10 -- # set +x
00:06:08.812 ************************************
00:06:08.812 START TEST rpc
00:06:08.812 ************************************
00:06:08.812 09:26:57 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:08.812 * Looking for test storage...
00:06:08.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:08.812 09:26:57 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:08.812 09:26:57 rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:06:08.812 09:26:57 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:08.812 09:26:57 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:08.812 09:26:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:08.812 09:26:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:08.812 09:26:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:08.812 09:26:57 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:08.812 09:26:57 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:08.812 09:26:57 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:08.812 09:26:57 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:08.812 09:26:57 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:08.812 09:26:57 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:08.812 09:26:57 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:08.812 09:26:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:08.812 09:26:57 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:08.813 09:26:57 rpc -- scripts/common.sh@345 -- # : 1
00:06:08.813 09:26:57 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:08.813 09:26:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:08.813 09:26:57 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:08.813 09:26:57 rpc -- scripts/common.sh@353 -- # local d=1
00:06:08.813 09:26:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:08.813 09:26:57 rpc -- scripts/common.sh@355 -- # echo 1
00:06:08.813 09:26:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:08.813 09:26:57 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:09.072 09:26:57 rpc -- scripts/common.sh@353 -- # local d=2
00:06:09.072 09:26:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:09.072 09:26:57 rpc -- scripts/common.sh@355 -- # echo 2
00:06:09.072 09:26:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:09.072 09:26:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:09.072 09:26:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:09.072 09:26:57 rpc -- scripts/common.sh@368 -- # return 0
00:06:09.072 09:26:57 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:09.072 09:26:57 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:09.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.072 --rc genhtml_branch_coverage=1
00:06:09.072 --rc genhtml_function_coverage=1
00:06:09.072 --rc genhtml_legend=1
00:06:09.072 --rc geninfo_all_blocks=1
00:06:09.072 --rc geninfo_unexecuted_blocks=1
00:06:09.072
00:06:09.072 '
00:06:09.072 09:26:57 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:09.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.072 --rc genhtml_branch_coverage=1
00:06:09.072 --rc genhtml_function_coverage=1
00:06:09.072 --rc genhtml_legend=1
00:06:09.072 --rc geninfo_all_blocks=1
00:06:09.072 --rc geninfo_unexecuted_blocks=1
00:06:09.072
00:06:09.072 '
00:06:09.072 09:26:57 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:09.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.072 --rc genhtml_branch_coverage=1
00:06:09.072 --rc genhtml_function_coverage=1
00:06:09.072 --rc genhtml_legend=1
00:06:09.072 --rc geninfo_all_blocks=1
00:06:09.072 --rc geninfo_unexecuted_blocks=1
00:06:09.072
00:06:09.072 '
00:06:09.072 09:26:57 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:09.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.072 --rc genhtml_branch_coverage=1
00:06:09.072 --rc genhtml_function_coverage=1
00:06:09.072 --rc genhtml_legend=1
00:06:09.073 --rc geninfo_all_blocks=1
00:06:09.073 --rc geninfo_unexecuted_blocks=1
00:06:09.073
00:06:09.073 '
00:06:09.073 09:26:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=98840
00:06:09.073 09:26:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:09.073 09:26:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:09.073 09:26:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 98840
00:06:09.073 09:26:57 rpc -- common/autotest_common.sh@831 -- # '[' -z 98840 ']'
00:06:09.073 09:26:57 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.073 09:26:57 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:09.073 09:26:57 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:09.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.073 09:26:57 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:09.073 09:26:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.073 [2024-10-07 09:26:57.867744] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:06:09.073 [2024-10-07 09:26:57.867827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98840 ]
00:06:09.073 [2024-10-07 09:26:57.924281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.073 [2024-10-07 09:26:58.032305] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:09.073 [2024-10-07 09:26:58.032366] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 98840' to capture a snapshot of events at runtime.
00:06:09.073 [2024-10-07 09:26:58.032380] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:09.073 [2024-10-07 09:26:58.032390] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:09.073 [2024-10-07 09:26:58.032399] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid98840 for offline analysis/debug.
00:06:09.073 [2024-10-07 09:26:58.032932] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.331 09:26:58 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:09.331 09:26:58 rpc -- common/autotest_common.sh@864 -- # return 0
00:06:09.331 09:26:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:09.331 09:26:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:09.332 09:26:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:09.332 09:26:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:09.332 09:26:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:09.332 09:26:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:09.332 09:26:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.332 ************************************
00:06:09.332 START TEST rpc_integrity
00:06:09.332 ************************************
00:06:09.332 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:06:09.332 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:09.332 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.332 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.332 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.332 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:09.332 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:09.593 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:09.593 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:09.593 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.593 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.593 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.593 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:09.593 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:09.593 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.593 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:09.594 {
00:06:09.594 "name": "Malloc0",
00:06:09.594 "aliases": [
00:06:09.594 "1b43e8f0-5960-4dbf-aa7a-d9d55f00b13b"
00:06:09.594 ],
00:06:09.594 "product_name": "Malloc disk",
00:06:09.594 "block_size": 512,
00:06:09.594 "num_blocks": 16384,
00:06:09.594 "uuid": "1b43e8f0-5960-4dbf-aa7a-d9d55f00b13b",
00:06:09.594 "assigned_rate_limits": {
00:06:09.594 "rw_ios_per_sec": 0,
00:06:09.594 "rw_mbytes_per_sec": 0,
00:06:09.594 "r_mbytes_per_sec": 0,
00:06:09.594 "w_mbytes_per_sec": 0
00:06:09.594 },
00:06:09.594 "claimed": false,
00:06:09.594 "zoned": false,
00:06:09.594 "supported_io_types": {
00:06:09.594 "read": true,
00:06:09.594 "write": true,
00:06:09.594 "unmap": true,
00:06:09.594 "flush": true,
00:06:09.594 "reset": true,
00:06:09.594 "nvme_admin": false,
00:06:09.594 "nvme_io": false,
00:06:09.594 "nvme_io_md": false,
00:06:09.594 "write_zeroes": true,
00:06:09.594 "zcopy": true,
00:06:09.594 "get_zone_info": false,
00:06:09.594 "zone_management": false,
00:06:09.594 "zone_append": false,
00:06:09.594 "compare": false,
00:06:09.594 "compare_and_write": false,
00:06:09.594 "abort": true,
00:06:09.594 "seek_hole": false,
00:06:09.594 "seek_data": false,
00:06:09.594 "copy": true,
00:06:09.594 "nvme_iov_md": false
00:06:09.594 },
00:06:09.594 "memory_domains": [
00:06:09.594 {
00:06:09.594 "dma_device_id": "system",
00:06:09.594 "dma_device_type": 1
00:06:09.594 },
00:06:09.594 {
00:06:09.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:09.594 "dma_device_type": 2
00:06:09.594 }
00:06:09.594 ],
00:06:09.594 "driver_specific": {}
00:06:09.594 }
00:06:09.594 ]'
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 [2024-10-07 09:26:58.410561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:09.594 [2024-10-07 09:26:58.410618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:09.594 [2024-10-07 09:26:58.410639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d4c900
00:06:09.594 [2024-10-07 09:26:58.410652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:09.594 [2024-10-07 09:26:58.412051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:09.594 [2024-10-07 09:26:58.412073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:09.594 Passthru0
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:09.594 {
00:06:09.594 "name": "Malloc0",
00:06:09.594 "aliases": [
00:06:09.594 "1b43e8f0-5960-4dbf-aa7a-d9d55f00b13b"
00:06:09.594 ],
00:06:09.594 "product_name": "Malloc disk",
00:06:09.594 "block_size": 512,
00:06:09.594 "num_blocks": 16384,
00:06:09.594 "uuid": "1b43e8f0-5960-4dbf-aa7a-d9d55f00b13b",
00:06:09.594 "assigned_rate_limits": {
00:06:09.594 "rw_ios_per_sec": 0,
00:06:09.594 "rw_mbytes_per_sec": 0,
00:06:09.594 "r_mbytes_per_sec": 0,
00:06:09.594 "w_mbytes_per_sec": 0
00:06:09.594 },
00:06:09.594 "claimed": true,
00:06:09.594 "claim_type": "exclusive_write",
00:06:09.594 "zoned": false,
00:06:09.594 "supported_io_types": {
00:06:09.594 "read": true,
00:06:09.594 "write": true,
00:06:09.594 "unmap": true,
00:06:09.594 "flush": true,
00:06:09.594 "reset": true,
00:06:09.594 "nvme_admin": false,
00:06:09.594 "nvme_io": false,
00:06:09.594 "nvme_io_md": false,
00:06:09.594 "write_zeroes": true,
00:06:09.594 "zcopy": true,
00:06:09.594 "get_zone_info": false,
00:06:09.594 "zone_management": false,
00:06:09.594 "zone_append": false,
00:06:09.594 "compare": false,
00:06:09.594 "compare_and_write": false,
00:06:09.594 "abort": true,
00:06:09.594 "seek_hole": false,
00:06:09.594 "seek_data": false,
00:06:09.594 "copy": true,
00:06:09.594 "nvme_iov_md": false
00:06:09.594 },
00:06:09.594 "memory_domains": [
00:06:09.594 {
00:06:09.594 "dma_device_id": "system",
00:06:09.594 "dma_device_type": 1
00:06:09.594 },
00:06:09.594 {
00:06:09.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:09.594 "dma_device_type": 2
00:06:09.594 }
00:06:09.594 ],
00:06:09.594 "driver_specific": {}
00:06:09.594 },
00:06:09.594 {
00:06:09.594 "name": "Passthru0",
00:06:09.594 "aliases": [
00:06:09.594 "9e9ee0f1-d782-59a8-ae3d-59c0e6218991"
00:06:09.594 ],
00:06:09.594 "product_name": "passthru",
00:06:09.594 "block_size": 512,
00:06:09.594 "num_blocks": 16384,
00:06:09.594 "uuid": "9e9ee0f1-d782-59a8-ae3d-59c0e6218991",
00:06:09.594 "assigned_rate_limits": {
00:06:09.594 "rw_ios_per_sec": 0,
00:06:09.594 "rw_mbytes_per_sec": 0,
00:06:09.594 "r_mbytes_per_sec": 0,
00:06:09.594 "w_mbytes_per_sec": 0
00:06:09.594 },
00:06:09.594 "claimed": false,
00:06:09.594 "zoned": false,
00:06:09.594 "supported_io_types": {
00:06:09.594 "read": true,
00:06:09.594 "write": true,
00:06:09.594 "unmap": true,
00:06:09.594 "flush": true,
00:06:09.594 "reset": true,
00:06:09.594 "nvme_admin": false,
00:06:09.594 "nvme_io": false,
00:06:09.594 "nvme_io_md": false,
00:06:09.594 "write_zeroes": true,
00:06:09.594 "zcopy": true,
00:06:09.594 "get_zone_info": false,
00:06:09.594 "zone_management": false,
00:06:09.594 "zone_append": false,
00:06:09.594 "compare": false,
00:06:09.594 "compare_and_write": false,
00:06:09.594 "abort": true,
00:06:09.594 "seek_hole": false,
00:06:09.594 "seek_data": false,
00:06:09.594 "copy": true,
00:06:09.594 "nvme_iov_md": false
00:06:09.594 },
00:06:09.594 "memory_domains": [
00:06:09.594 {
00:06:09.594 "dma_device_id": "system",
00:06:09.594 "dma_device_type": 1
00:06:09.594 },
00:06:09.594 {
00:06:09.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:09.594 "dma_device_type": 2
00:06:09.594 }
00:06:09.594 ],
00:06:09.594 "driver_specific": {
00:06:09.594 "passthru": {
00:06:09.594 "name": "Passthru0",
00:06:09.594 "base_bdev_name": "Malloc0"
00:06:09.594 }
00:06:09.594 }
00:06:09.594 }
00:06:09.594 ]'
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:09.594 09:26:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:09.594
00:06:09.594 real 0m0.210s
00:06:09.594 user 0m0.140s
00:06:09.594 sys 0m0.018s
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:09.594 09:26:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 ************************************
00:06:09.594 END TEST rpc_integrity
00:06:09.594 ************************************
00:06:09.594 09:26:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:09.594 09:26:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:09.594 09:26:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:09.594 09:26:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 ************************************
00:06:09.594 START TEST rpc_plugins
00:06:09.594 ************************************
00:06:09.594 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:06:09.594 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:09.594 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.594 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:09.594 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.594 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:09.594 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:09.594 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.594 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.855 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:09.855 {
00:06:09.855 "name": "Malloc1",
00:06:09.855 "aliases": [
00:06:09.855 "69701895-87f7-48f9-87c5-1219475aa14c"
00:06:09.855 ],
00:06:09.855 "product_name": "Malloc disk",
00:06:09.855 "block_size": 4096,
00:06:09.855 "num_blocks": 256,
00:06:09.855 "uuid": "69701895-87f7-48f9-87c5-1219475aa14c",
00:06:09.855 "assigned_rate_limits": {
00:06:09.855 "rw_ios_per_sec": 0,
00:06:09.855 "rw_mbytes_per_sec": 0,
00:06:09.855 "r_mbytes_per_sec": 0,
00:06:09.855 "w_mbytes_per_sec": 0
00:06:09.855 },
00:06:09.855 "claimed": false,
00:06:09.855 "zoned": false,
00:06:09.855 "supported_io_types": {
00:06:09.855 "read": true,
00:06:09.855 "write": true,
00:06:09.855 "unmap": true,
00:06:09.855 "flush": true,
00:06:09.855 "reset": true,
00:06:09.855 "nvme_admin": false,
00:06:09.855 "nvme_io": false,
00:06:09.855 "nvme_io_md": false,
00:06:09.855 "write_zeroes": true,
00:06:09.855 "zcopy": true,
00:06:09.855 "get_zone_info": false,
00:06:09.855 "zone_management": false,
00:06:09.855 "zone_append": false,
00:06:09.855 "compare": false,
00:06:09.855 "compare_and_write": false,
00:06:09.855 "abort": true,
00:06:09.855 "seek_hole": false,
00:06:09.855 "seek_data": false,
00:06:09.855 "copy": true,
00:06:09.855 "nvme_iov_md": false
00:06:09.855 },
00:06:09.855 "memory_domains": [
00:06:09.855 {
00:06:09.855 "dma_device_id": "system",
00:06:09.855 "dma_device_type": 1
00:06:09.855 },
00:06:09.855 {
00:06:09.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:09.855 "dma_device_type": 2
00:06:09.855 }
00:06:09.855 ],
00:06:09.855 "driver_specific": {}
00:06:09.855 }
00:06:09.855 ]'
00:06:09.855 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:09.855 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:09.855 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.855 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:09.855 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:09.855 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:09.855 09:26:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:09.855
00:06:09.855 real 0m0.115s
00:06:09.855 user 0m0.073s
00:06:09.855 sys 0m0.011s
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:09.855 09:26:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:09.855 ************************************
00:06:09.855 END TEST rpc_plugins 00:06:09.855 ************************************ 00:06:09.855 09:26:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:09.855 09:26:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.855 09:26:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.855 09:26:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.855 ************************************ 00:06:09.855 START TEST rpc_trace_cmd_test 00:06:09.855 ************************************ 00:06:09.855 09:26:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:09.855 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:09.855 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:09.855 09:26:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.855 09:26:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.855 09:26:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.855 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:09.855 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid98840", 00:06:09.855 "tpoint_group_mask": "0x8", 00:06:09.855 "iscsi_conn": { 00:06:09.855 "mask": "0x2", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "scsi": { 00:06:09.856 "mask": "0x4", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "bdev": { 00:06:09.856 "mask": "0x8", 00:06:09.856 "tpoint_mask": "0xffffffffffffffff" 00:06:09.856 }, 00:06:09.856 "nvmf_rdma": { 00:06:09.856 "mask": "0x10", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "nvmf_tcp": { 00:06:09.856 "mask": "0x20", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "ftl": { 00:06:09.856 "mask": "0x40", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "blobfs": { 00:06:09.856 "mask": "0x80", 00:06:09.856 
"tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "dsa": { 00:06:09.856 "mask": "0x200", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "thread": { 00:06:09.856 "mask": "0x400", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "nvme_pcie": { 00:06:09.856 "mask": "0x800", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "iaa": { 00:06:09.856 "mask": "0x1000", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "nvme_tcp": { 00:06:09.856 "mask": "0x2000", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "bdev_nvme": { 00:06:09.856 "mask": "0x4000", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "sock": { 00:06:09.856 "mask": "0x8000", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "blob": { 00:06:09.856 "mask": "0x10000", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "bdev_raid": { 00:06:09.856 "mask": "0x20000", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 }, 00:06:09.856 "scheduler": { 00:06:09.856 "mask": "0x40000", 00:06:09.856 "tpoint_mask": "0x0" 00:06:09.856 } 00:06:09.856 }' 00:06:09.856 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:09.856 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:09.856 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:09.856 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:09.856 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:09.856 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:09.856 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:10.117 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:10.117 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:10.117 09:26:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:10.117 00:06:10.117 real 0m0.188s 00:06:10.117 user 0m0.168s 00:06:10.117 sys 0m0.013s 00:06:10.117 09:26:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.117 09:26:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.117 ************************************ 00:06:10.117 END TEST rpc_trace_cmd_test 00:06:10.117 ************************************ 00:06:10.117 09:26:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:10.117 09:26:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:10.117 09:26:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:10.117 09:26:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.117 09:26:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.117 09:26:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.117 ************************************ 00:06:10.117 START TEST rpc_daemon_integrity 00:06:10.117 ************************************ 00:06:10.117 09:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:10.117 09:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:10.117 09:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.117 09:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.118 09:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.118 09:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:10.118 09:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:10.118 09:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:10.118 09:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:10.118 09:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.118 09:26:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:10.118 { 00:06:10.118 "name": "Malloc2", 00:06:10.118 "aliases": [ 00:06:10.118 "dd55d2a5-6c24-483c-98cf-0c5411549165" 00:06:10.118 ], 00:06:10.118 "product_name": "Malloc disk", 00:06:10.118 "block_size": 512, 00:06:10.118 "num_blocks": 16384, 00:06:10.118 "uuid": "dd55d2a5-6c24-483c-98cf-0c5411549165", 00:06:10.118 "assigned_rate_limits": { 00:06:10.118 "rw_ios_per_sec": 0, 00:06:10.118 "rw_mbytes_per_sec": 0, 00:06:10.118 "r_mbytes_per_sec": 0, 00:06:10.118 "w_mbytes_per_sec": 0 00:06:10.118 }, 00:06:10.118 "claimed": false, 00:06:10.118 "zoned": false, 00:06:10.118 "supported_io_types": { 00:06:10.118 "read": true, 00:06:10.118 "write": true, 00:06:10.118 "unmap": true, 00:06:10.118 "flush": true, 00:06:10.118 "reset": true, 00:06:10.118 "nvme_admin": false, 00:06:10.118 "nvme_io": false, 00:06:10.118 "nvme_io_md": false, 00:06:10.118 "write_zeroes": true, 00:06:10.118 "zcopy": true, 00:06:10.118 "get_zone_info": false, 00:06:10.118 "zone_management": false, 00:06:10.118 "zone_append": false, 00:06:10.118 "compare": false, 00:06:10.118 "compare_and_write": false, 00:06:10.118 "abort": true, 00:06:10.118 "seek_hole": false, 00:06:10.118 "seek_data": false, 00:06:10.118 "copy": true, 00:06:10.118 "nvme_iov_md": false 00:06:10.118 }, 00:06:10.118 "memory_domains": [ 00:06:10.118 { 
00:06:10.118 "dma_device_id": "system", 00:06:10.118 "dma_device_type": 1 00:06:10.118 }, 00:06:10.118 { 00:06:10.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.118 "dma_device_type": 2 00:06:10.118 } 00:06:10.118 ], 00:06:10.118 "driver_specific": {} 00:06:10.118 } 00:06:10.118 ]' 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.118 [2024-10-07 09:26:59.057044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:10.118 [2024-10-07 09:26:59.057080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.118 [2024-10-07 09:26:59.057114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dddf70 00:06:10.118 [2024-10-07 09:26:59.057127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.118 [2024-10-07 09:26:59.058304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.118 [2024-10-07 09:26:59.058325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:10.118 Passthru0 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:10.118 { 00:06:10.118 "name": "Malloc2", 00:06:10.118 "aliases": [ 00:06:10.118 "dd55d2a5-6c24-483c-98cf-0c5411549165" 00:06:10.118 ], 00:06:10.118 "product_name": "Malloc disk", 00:06:10.118 "block_size": 512, 00:06:10.118 "num_blocks": 16384, 00:06:10.118 "uuid": "dd55d2a5-6c24-483c-98cf-0c5411549165", 00:06:10.118 "assigned_rate_limits": { 00:06:10.118 "rw_ios_per_sec": 0, 00:06:10.118 "rw_mbytes_per_sec": 0, 00:06:10.118 "r_mbytes_per_sec": 0, 00:06:10.118 "w_mbytes_per_sec": 0 00:06:10.118 }, 00:06:10.118 "claimed": true, 00:06:10.118 "claim_type": "exclusive_write", 00:06:10.118 "zoned": false, 00:06:10.118 "supported_io_types": { 00:06:10.118 "read": true, 00:06:10.118 "write": true, 00:06:10.118 "unmap": true, 00:06:10.118 "flush": true, 00:06:10.118 "reset": true, 00:06:10.118 "nvme_admin": false, 00:06:10.118 "nvme_io": false, 00:06:10.118 "nvme_io_md": false, 00:06:10.118 "write_zeroes": true, 00:06:10.118 "zcopy": true, 00:06:10.118 "get_zone_info": false, 00:06:10.118 "zone_management": false, 00:06:10.118 "zone_append": false, 00:06:10.118 "compare": false, 00:06:10.118 "compare_and_write": false, 00:06:10.118 "abort": true, 00:06:10.118 "seek_hole": false, 00:06:10.118 "seek_data": false, 00:06:10.118 "copy": true, 00:06:10.118 "nvme_iov_md": false 00:06:10.118 }, 00:06:10.118 "memory_domains": [ 00:06:10.118 { 00:06:10.118 "dma_device_id": "system", 00:06:10.118 "dma_device_type": 1 00:06:10.118 }, 00:06:10.118 { 00:06:10.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.118 "dma_device_type": 2 00:06:10.118 } 00:06:10.118 ], 00:06:10.118 "driver_specific": {} 00:06:10.118 }, 00:06:10.118 { 00:06:10.118 "name": "Passthru0", 00:06:10.118 "aliases": [ 00:06:10.118 "7138c3ab-b7ee-5c6e-adec-06fd5131c822" 00:06:10.118 ], 00:06:10.118 "product_name": "passthru", 00:06:10.118 "block_size": 512, 00:06:10.118 "num_blocks": 16384, 00:06:10.118 "uuid": 
"7138c3ab-b7ee-5c6e-adec-06fd5131c822", 00:06:10.118 "assigned_rate_limits": { 00:06:10.118 "rw_ios_per_sec": 0, 00:06:10.118 "rw_mbytes_per_sec": 0, 00:06:10.118 "r_mbytes_per_sec": 0, 00:06:10.118 "w_mbytes_per_sec": 0 00:06:10.118 }, 00:06:10.118 "claimed": false, 00:06:10.118 "zoned": false, 00:06:10.118 "supported_io_types": { 00:06:10.118 "read": true, 00:06:10.118 "write": true, 00:06:10.118 "unmap": true, 00:06:10.118 "flush": true, 00:06:10.118 "reset": true, 00:06:10.118 "nvme_admin": false, 00:06:10.118 "nvme_io": false, 00:06:10.118 "nvme_io_md": false, 00:06:10.118 "write_zeroes": true, 00:06:10.118 "zcopy": true, 00:06:10.118 "get_zone_info": false, 00:06:10.118 "zone_management": false, 00:06:10.118 "zone_append": false, 00:06:10.118 "compare": false, 00:06:10.118 "compare_and_write": false, 00:06:10.118 "abort": true, 00:06:10.118 "seek_hole": false, 00:06:10.118 "seek_data": false, 00:06:10.118 "copy": true, 00:06:10.118 "nvme_iov_md": false 00:06:10.118 }, 00:06:10.118 "memory_domains": [ 00:06:10.118 { 00:06:10.118 "dma_device_id": "system", 00:06:10.118 "dma_device_type": 1 00:06:10.118 }, 00:06:10.118 { 00:06:10.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.118 "dma_device_type": 2 00:06:10.118 } 00:06:10.118 ], 00:06:10.118 "driver_specific": { 00:06:10.118 "passthru": { 00:06:10.118 "name": "Passthru0", 00:06:10.118 "base_bdev_name": "Malloc2" 00:06:10.118 } 00:06:10.118 } 00:06:10.118 } 00:06:10.118 ]' 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.118 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:10.380 00:06:10.380 real 0m0.211s 00:06:10.380 user 0m0.128s 00:06:10.380 sys 0m0.028s 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.380 09:26:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.380 ************************************ 00:06:10.380 END TEST rpc_daemon_integrity 00:06:10.380 ************************************ 00:06:10.380 09:26:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:10.380 09:26:59 rpc -- rpc/rpc.sh@84 -- # killprocess 98840 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@950 -- # '[' -z 98840 ']' 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@954 -- # kill -0 98840 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@955 -- # uname 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.380 09:26:59 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98840 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98840' 00:06:10.380 killing process with pid 98840 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@969 -- # kill 98840 00:06:10.380 09:26:59 rpc -- common/autotest_common.sh@974 -- # wait 98840 00:06:10.951 00:06:10.951 real 0m2.006s 00:06:10.951 user 0m2.474s 00:06:10.951 sys 0m0.599s 00:06:10.951 09:26:59 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.951 09:26:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.951 ************************************ 00:06:10.951 END TEST rpc 00:06:10.951 ************************************ 00:06:10.951 09:26:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.951 09:26:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.951 09:26:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.951 09:26:59 -- common/autotest_common.sh@10 -- # set +x 00:06:10.951 ************************************ 00:06:10.951 START TEST skip_rpc 00:06:10.951 ************************************ 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.951 * Looking for test storage... 
00:06:10.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.951 09:26:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.951 --rc genhtml_branch_coverage=1 00:06:10.951 --rc genhtml_function_coverage=1 00:06:10.951 --rc genhtml_legend=1 00:06:10.951 --rc geninfo_all_blocks=1 00:06:10.951 --rc geninfo_unexecuted_blocks=1 00:06:10.951 00:06:10.951 ' 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.951 --rc genhtml_branch_coverage=1 00:06:10.951 --rc genhtml_function_coverage=1 00:06:10.951 --rc genhtml_legend=1 00:06:10.951 --rc geninfo_all_blocks=1 00:06:10.951 --rc geninfo_unexecuted_blocks=1 00:06:10.951 00:06:10.951 ' 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:10.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.951 --rc genhtml_branch_coverage=1 00:06:10.951 --rc genhtml_function_coverage=1 00:06:10.951 --rc genhtml_legend=1 00:06:10.951 --rc geninfo_all_blocks=1 00:06:10.951 --rc geninfo_unexecuted_blocks=1 00:06:10.951 00:06:10.951 ' 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.951 --rc genhtml_branch_coverage=1 00:06:10.951 --rc genhtml_function_coverage=1 00:06:10.951 --rc genhtml_legend=1 00:06:10.951 --rc geninfo_all_blocks=1 00:06:10.951 --rc geninfo_unexecuted_blocks=1 00:06:10.951 00:06:10.951 ' 00:06:10.951 09:26:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.951 09:26:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.951 09:26:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.951 09:26:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.951 ************************************ 00:06:10.951 START TEST skip_rpc 00:06:10.951 ************************************ 00:06:10.951 09:26:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:10.951 09:26:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=99271 00:06:10.951 09:26:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:10.951 09:26:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.951 09:26:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:11.210 [2024-10-07 09:26:59.948427] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:06:11.210 [2024-10-07 09:26:59.948492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99271 ] 00:06:11.210 [2024-10-07 09:27:00.003786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.210 [2024-10-07 09:27:00.117021] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.499 09:27:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 99271 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 99271 ']' 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 99271 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99271 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99271' 00:06:16.499 killing process with pid 99271 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 99271 00:06:16.499 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 99271 00:06:16.499 00:06:16.499 real 0m5.504s 00:06:16.499 user 0m5.185s 00:06:16.499 sys 0m0.318s 00:06:16.499 09:27:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.499 09:27:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.499 ************************************ 00:06:16.499 END TEST skip_rpc 00:06:16.499 ************************************ 00:06:16.499 09:27:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:16.499 09:27:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.499 09:27:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.499 09:27:05 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.499 ************************************ 00:06:16.499 START TEST skip_rpc_with_json 00:06:16.499 ************************************ 00:06:16.499 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:16.499 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:16.499 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100047 00:06:16.499 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.499 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.499 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100047 00:06:16.499 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 100047 ']' 00:06:16.500 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.500 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.500 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.500 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.500 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.760 [2024-10-07 09:27:05.509638] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:06:16.760 [2024-10-07 09:27:05.509727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100047 ] 00:06:16.760 [2024-10-07 09:27:05.563853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.760 [2024-10-07 09:27:05.669190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.021 [2024-10-07 09:27:05.924292] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:17.021 request: 00:06:17.021 { 00:06:17.021 "trtype": "tcp", 00:06:17.021 "method": "nvmf_get_transports", 00:06:17.021 "req_id": 1 00:06:17.021 } 00:06:17.021 Got JSON-RPC error response 00:06:17.021 response: 00:06:17.021 { 00:06:17.021 "code": -19, 00:06:17.021 "message": "No such device" 00:06:17.021 } 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.021 [2024-10-07 09:27:05.932411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.021 09:27:05 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.021 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.283 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.283 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:17.283 { 00:06:17.283 "subsystems": [ 00:06:17.283 { 00:06:17.283 "subsystem": "fsdev", 00:06:17.283 "config": [ 00:06:17.283 { 00:06:17.283 "method": "fsdev_set_opts", 00:06:17.283 "params": { 00:06:17.283 "fsdev_io_pool_size": 65535, 00:06:17.283 "fsdev_io_cache_size": 256 00:06:17.283 } 00:06:17.283 } 00:06:17.283 ] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "vfio_user_target", 00:06:17.283 "config": null 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "keyring", 00:06:17.283 "config": [] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "iobuf", 00:06:17.283 "config": [ 00:06:17.283 { 00:06:17.283 "method": "iobuf_set_options", 00:06:17.283 "params": { 00:06:17.283 "small_pool_count": 8192, 00:06:17.283 "large_pool_count": 1024, 00:06:17.283 "small_bufsize": 8192, 00:06:17.283 "large_bufsize": 135168 00:06:17.283 } 00:06:17.283 } 00:06:17.283 ] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "sock", 00:06:17.283 "config": [ 00:06:17.283 { 00:06:17.283 "method": "sock_set_default_impl", 00:06:17.283 "params": { 00:06:17.283 "impl_name": "posix" 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "sock_impl_set_options", 00:06:17.283 "params": { 00:06:17.283 "impl_name": "ssl", 00:06:17.283 "recv_buf_size": 4096, 00:06:17.283 "send_buf_size": 4096, 00:06:17.283 "enable_recv_pipe": true, 
00:06:17.283 "enable_quickack": false, 00:06:17.283 "enable_placement_id": 0, 00:06:17.283 "enable_zerocopy_send_server": true, 00:06:17.283 "enable_zerocopy_send_client": false, 00:06:17.283 "zerocopy_threshold": 0, 00:06:17.283 "tls_version": 0, 00:06:17.283 "enable_ktls": false 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "sock_impl_set_options", 00:06:17.283 "params": { 00:06:17.283 "impl_name": "posix", 00:06:17.283 "recv_buf_size": 2097152, 00:06:17.283 "send_buf_size": 2097152, 00:06:17.283 "enable_recv_pipe": true, 00:06:17.283 "enable_quickack": false, 00:06:17.283 "enable_placement_id": 0, 00:06:17.283 "enable_zerocopy_send_server": true, 00:06:17.283 "enable_zerocopy_send_client": false, 00:06:17.283 "zerocopy_threshold": 0, 00:06:17.283 "tls_version": 0, 00:06:17.283 "enable_ktls": false 00:06:17.283 } 00:06:17.283 } 00:06:17.283 ] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "vmd", 00:06:17.283 "config": [] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "accel", 00:06:17.283 "config": [ 00:06:17.283 { 00:06:17.283 "method": "accel_set_options", 00:06:17.283 "params": { 00:06:17.283 "small_cache_size": 128, 00:06:17.283 "large_cache_size": 16, 00:06:17.283 "task_count": 2048, 00:06:17.283 "sequence_count": 2048, 00:06:17.283 "buf_count": 2048 00:06:17.283 } 00:06:17.283 } 00:06:17.283 ] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "bdev", 00:06:17.283 "config": [ 00:06:17.283 { 00:06:17.283 "method": "bdev_set_options", 00:06:17.283 "params": { 00:06:17.283 "bdev_io_pool_size": 65535, 00:06:17.283 "bdev_io_cache_size": 256, 00:06:17.283 "bdev_auto_examine": true, 00:06:17.283 "iobuf_small_cache_size": 128, 00:06:17.283 "iobuf_large_cache_size": 16 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "bdev_raid_set_options", 00:06:17.283 "params": { 00:06:17.283 "process_window_size_kb": 1024, 00:06:17.283 "process_max_bandwidth_mb_sec": 0 00:06:17.283 } 00:06:17.283 }, 
00:06:17.283 { 00:06:17.283 "method": "bdev_iscsi_set_options", 00:06:17.283 "params": { 00:06:17.283 "timeout_sec": 30 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "bdev_nvme_set_options", 00:06:17.283 "params": { 00:06:17.283 "action_on_timeout": "none", 00:06:17.283 "timeout_us": 0, 00:06:17.283 "timeout_admin_us": 0, 00:06:17.283 "keep_alive_timeout_ms": 10000, 00:06:17.283 "arbitration_burst": 0, 00:06:17.283 "low_priority_weight": 0, 00:06:17.283 "medium_priority_weight": 0, 00:06:17.283 "high_priority_weight": 0, 00:06:17.283 "nvme_adminq_poll_period_us": 10000, 00:06:17.283 "nvme_ioq_poll_period_us": 0, 00:06:17.283 "io_queue_requests": 0, 00:06:17.283 "delay_cmd_submit": true, 00:06:17.283 "transport_retry_count": 4, 00:06:17.283 "bdev_retry_count": 3, 00:06:17.283 "transport_ack_timeout": 0, 00:06:17.283 "ctrlr_loss_timeout_sec": 0, 00:06:17.283 "reconnect_delay_sec": 0, 00:06:17.283 "fast_io_fail_timeout_sec": 0, 00:06:17.283 "disable_auto_failback": false, 00:06:17.283 "generate_uuids": false, 00:06:17.283 "transport_tos": 0, 00:06:17.283 "nvme_error_stat": false, 00:06:17.283 "rdma_srq_size": 0, 00:06:17.283 "io_path_stat": false, 00:06:17.283 "allow_accel_sequence": false, 00:06:17.283 "rdma_max_cq_size": 0, 00:06:17.283 "rdma_cm_event_timeout_ms": 0, 00:06:17.283 "dhchap_digests": [ 00:06:17.283 "sha256", 00:06:17.283 "sha384", 00:06:17.283 "sha512" 00:06:17.283 ], 00:06:17.283 "dhchap_dhgroups": [ 00:06:17.283 "null", 00:06:17.283 "ffdhe2048", 00:06:17.283 "ffdhe3072", 00:06:17.283 "ffdhe4096", 00:06:17.283 "ffdhe6144", 00:06:17.283 "ffdhe8192" 00:06:17.283 ] 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "bdev_nvme_set_hotplug", 00:06:17.283 "params": { 00:06:17.283 "period_us": 100000, 00:06:17.283 "enable": false 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "bdev_wait_for_examine" 00:06:17.283 } 00:06:17.283 ] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "scsi", 
00:06:17.283 "config": null 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "scheduler", 00:06:17.283 "config": [ 00:06:17.283 { 00:06:17.283 "method": "framework_set_scheduler", 00:06:17.283 "params": { 00:06:17.283 "name": "static" 00:06:17.283 } 00:06:17.283 } 00:06:17.283 ] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "vhost_scsi", 00:06:17.283 "config": [] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "vhost_blk", 00:06:17.283 "config": [] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "ublk", 00:06:17.283 "config": [] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "nbd", 00:06:17.283 "config": [] 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "subsystem": "nvmf", 00:06:17.283 "config": [ 00:06:17.283 { 00:06:17.283 "method": "nvmf_set_config", 00:06:17.283 "params": { 00:06:17.283 "discovery_filter": "match_any", 00:06:17.283 "admin_cmd_passthru": { 00:06:17.283 "identify_ctrlr": false 00:06:17.283 }, 00:06:17.283 "dhchap_digests": [ 00:06:17.283 "sha256", 00:06:17.283 "sha384", 00:06:17.283 "sha512" 00:06:17.283 ], 00:06:17.283 "dhchap_dhgroups": [ 00:06:17.283 "null", 00:06:17.283 "ffdhe2048", 00:06:17.283 "ffdhe3072", 00:06:17.283 "ffdhe4096", 00:06:17.283 "ffdhe6144", 00:06:17.283 "ffdhe8192" 00:06:17.283 ] 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "nvmf_set_max_subsystems", 00:06:17.283 "params": { 00:06:17.283 "max_subsystems": 1024 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "nvmf_set_crdt", 00:06:17.283 "params": { 00:06:17.283 "crdt1": 0, 00:06:17.283 "crdt2": 0, 00:06:17.283 "crdt3": 0 00:06:17.283 } 00:06:17.283 }, 00:06:17.283 { 00:06:17.283 "method": "nvmf_create_transport", 00:06:17.283 "params": { 00:06:17.283 "trtype": "TCP", 00:06:17.283 "max_queue_depth": 128, 00:06:17.283 "max_io_qpairs_per_ctrlr": 127, 00:06:17.283 "in_capsule_data_size": 4096, 00:06:17.283 "max_io_size": 131072, 00:06:17.283 "io_unit_size": 131072, 00:06:17.283 
"max_aq_depth": 128, 00:06:17.284 "num_shared_buffers": 511, 00:06:17.284 "buf_cache_size": 4294967295, 00:06:17.284 "dif_insert_or_strip": false, 00:06:17.284 "zcopy": false, 00:06:17.284 "c2h_success": true, 00:06:17.284 "sock_priority": 0, 00:06:17.284 "abort_timeout_sec": 1, 00:06:17.284 "ack_timeout": 0, 00:06:17.284 "data_wr_pool_size": 0 00:06:17.284 } 00:06:17.284 } 00:06:17.284 ] 00:06:17.284 }, 00:06:17.284 { 00:06:17.284 "subsystem": "iscsi", 00:06:17.284 "config": [ 00:06:17.284 { 00:06:17.284 "method": "iscsi_set_options", 00:06:17.284 "params": { 00:06:17.284 "node_base": "iqn.2016-06.io.spdk", 00:06:17.284 "max_sessions": 128, 00:06:17.284 "max_connections_per_session": 2, 00:06:17.284 "max_queue_depth": 64, 00:06:17.284 "default_time2wait": 2, 00:06:17.284 "default_time2retain": 20, 00:06:17.284 "first_burst_length": 8192, 00:06:17.284 "immediate_data": true, 00:06:17.284 "allow_duplicated_isid": false, 00:06:17.284 "error_recovery_level": 0, 00:06:17.284 "nop_timeout": 60, 00:06:17.284 "nop_in_interval": 30, 00:06:17.284 "disable_chap": false, 00:06:17.284 "require_chap": false, 00:06:17.284 "mutual_chap": false, 00:06:17.284 "chap_group": 0, 00:06:17.284 "max_large_datain_per_connection": 64, 00:06:17.284 "max_r2t_per_connection": 4, 00:06:17.284 "pdu_pool_size": 36864, 00:06:17.284 "immediate_data_pool_size": 16384, 00:06:17.284 "data_out_pool_size": 2048 00:06:17.284 } 00:06:17.284 } 00:06:17.284 ] 00:06:17.284 } 00:06:17.284 ] 00:06:17.284 } 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100047 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 100047 ']' 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 100047 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100047 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100047' 00:06:17.284 killing process with pid 100047 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 100047 00:06:17.284 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 100047 00:06:17.854 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=100181 00:06:17.854 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:17.854 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 100181 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 100181 ']' 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 100181 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100181 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100181' 00:06:23.131 killing process with pid 100181 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 100181 00:06:23.131 09:27:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 100181 00:06:23.131 09:27:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:23.131 09:27:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:23.131 00:06:23.131 real 0m6.634s 00:06:23.131 user 0m6.308s 00:06:23.131 sys 0m0.647s 00:06:23.131 09:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.131 09:27:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.131 ************************************ 00:06:23.131 END TEST skip_rpc_with_json 00:06:23.131 ************************************ 00:06:23.131 09:27:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:23.131 09:27:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.131 09:27:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.131 09:27:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.392 ************************************ 00:06:23.392 START TEST skip_rpc_with_delay 00:06:23.392 ************************************ 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:23.392 [2024-10-07 09:27:12.191688] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:23.392 [2024-10-07 09:27:12.191821] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.392 00:06:23.392 real 0m0.074s 00:06:23.392 user 0m0.044s 00:06:23.392 sys 0m0.030s 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.392 09:27:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:23.392 ************************************ 00:06:23.392 END TEST skip_rpc_with_delay 00:06:23.392 ************************************ 00:06:23.392 09:27:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:23.392 09:27:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:23.392 09:27:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:23.392 09:27:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.392 09:27:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.392 09:27:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.392 ************************************ 00:06:23.392 START TEST exit_on_failed_rpc_init 00:06:23.392 ************************************ 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=101380 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 101380 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 101380 ']' 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.392 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:23.392 [2024-10-07 09:27:12.314914] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:06:23.392 [2024-10-07 09:27:12.315023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101380 ] 00:06:23.392 [2024-10-07 09:27:12.371222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.653 [2024-10-07 09:27:12.482272] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:23.913 09:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.913 [2024-10-07 09:27:12.798129] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:06:23.913 [2024-10-07 09:27:12.798226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101386 ] 00:06:23.913 [2024-10-07 09:27:12.853502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.171 [2024-10-07 09:27:12.961048] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.171 [2024-10-07 09:27:12.961180] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:24.171 [2024-10-07 09:27:12.961199] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:24.171 [2024-10-07 09:27:12.961210] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.171 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:24.171 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.171 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:24.171 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:24.171 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:24.171 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.171 09:27:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:24.171 09:27:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 101380 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 101380 ']' 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 101380 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101380 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101380' 
00:06:24.172 killing process with pid 101380 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 101380 00:06:24.172 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 101380 00:06:24.741 00:06:24.741 real 0m1.319s 00:06:24.741 user 0m1.493s 00:06:24.741 sys 0m0.459s 00:06:24.741 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.741 09:27:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:24.741 ************************************ 00:06:24.741 END TEST exit_on_failed_rpc_init 00:06:24.741 ************************************ 00:06:24.741 09:27:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:24.741 00:06:24.741 real 0m13.882s 00:06:24.741 user 0m13.210s 00:06:24.741 sys 0m1.644s 00:06:24.741 09:27:13 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.741 09:27:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.741 ************************************ 00:06:24.741 END TEST skip_rpc 00:06:24.741 ************************************ 00:06:24.741 09:27:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:24.741 09:27:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.741 09:27:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.741 09:27:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.741 ************************************ 00:06:24.741 START TEST rpc_client 00:06:24.741 ************************************ 00:06:24.741 09:27:13 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:24.741 * Looking for test storage... 
00:06:24.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:24.741 09:27:13 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.741 09:27:13 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.741 09:27:13 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.999 09:27:13 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.999 09:27:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.000 09:27:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:25.000 09:27:13 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.000 09:27:13 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:25.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.000 --rc genhtml_branch_coverage=1 00:06:25.000 --rc genhtml_function_coverage=1 00:06:25.000 --rc genhtml_legend=1 00:06:25.000 --rc geninfo_all_blocks=1 00:06:25.000 --rc geninfo_unexecuted_blocks=1 00:06:25.000 00:06:25.000 ' 00:06:25.000 09:27:13 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:25.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.000 --rc genhtml_branch_coverage=1 00:06:25.000 --rc genhtml_function_coverage=1 00:06:25.000 --rc genhtml_legend=1 00:06:25.000 --rc geninfo_all_blocks=1 00:06:25.000 --rc geninfo_unexecuted_blocks=1 00:06:25.000 00:06:25.000 ' 00:06:25.000 09:27:13 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:25.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.000 --rc genhtml_branch_coverage=1 00:06:25.000 --rc genhtml_function_coverage=1 00:06:25.000 --rc genhtml_legend=1 00:06:25.000 --rc geninfo_all_blocks=1 00:06:25.000 --rc geninfo_unexecuted_blocks=1 00:06:25.000 00:06:25.000 ' 00:06:25.000 09:27:13 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:25.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.000 --rc genhtml_branch_coverage=1 00:06:25.000 --rc genhtml_function_coverage=1 00:06:25.000 --rc genhtml_legend=1 00:06:25.000 --rc geninfo_all_blocks=1 00:06:25.000 --rc geninfo_unexecuted_blocks=1 00:06:25.000 00:06:25.000 ' 00:06:25.000 09:27:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:25.000 OK 00:06:25.000 09:27:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:25.000 00:06:25.000 real 0m0.164s 00:06:25.000 user 0m0.115s 00:06:25.000 sys 0m0.058s 00:06:25.000 09:27:13 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.000 09:27:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:25.000 ************************************ 00:06:25.000 END TEST rpc_client 00:06:25.000 ************************************ 00:06:25.000 09:27:13 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:25.000 09:27:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.000 09:27:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.000 09:27:13 -- common/autotest_common.sh@10 -- # set +x 00:06:25.000 ************************************ 00:06:25.000 START TEST json_config 00:06:25.000 ************************************ 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:25.000 09:27:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.000 09:27:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.000 09:27:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.000 09:27:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.000 09:27:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.000 09:27:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.000 09:27:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.000 09:27:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.000 09:27:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.000 09:27:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.000 09:27:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.000 09:27:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:25.000 09:27:13 json_config -- scripts/common.sh@345 -- # : 1 00:06:25.000 09:27:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.000 09:27:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.000 09:27:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:25.000 09:27:13 json_config -- scripts/common.sh@353 -- # local d=1 00:06:25.000 09:27:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.000 09:27:13 json_config -- scripts/common.sh@355 -- # echo 1 00:06:25.000 09:27:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.000 09:27:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:25.000 09:27:13 json_config -- scripts/common.sh@353 -- # local d=2 00:06:25.000 09:27:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.000 09:27:13 json_config -- scripts/common.sh@355 -- # echo 2 00:06:25.000 09:27:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.000 09:27:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.000 09:27:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.000 09:27:13 json_config -- scripts/common.sh@368 -- # return 0 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:25.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.000 --rc genhtml_branch_coverage=1 00:06:25.000 --rc genhtml_function_coverage=1 00:06:25.000 --rc genhtml_legend=1 00:06:25.000 --rc geninfo_all_blocks=1 00:06:25.000 --rc geninfo_unexecuted_blocks=1 00:06:25.000 00:06:25.000 ' 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:25.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.000 --rc genhtml_branch_coverage=1 00:06:25.000 --rc genhtml_function_coverage=1 00:06:25.000 --rc genhtml_legend=1 00:06:25.000 --rc geninfo_all_blocks=1 00:06:25.000 --rc geninfo_unexecuted_blocks=1 00:06:25.000 00:06:25.000 ' 00:06:25.000 09:27:13 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:25.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.000 --rc genhtml_branch_coverage=1 00:06:25.000 --rc genhtml_function_coverage=1 00:06:25.000 --rc genhtml_legend=1 00:06:25.000 --rc geninfo_all_blocks=1 00:06:25.000 --rc geninfo_unexecuted_blocks=1 00:06:25.000 00:06:25.000 ' 00:06:25.000 09:27:13 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:25.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.000 --rc genhtml_branch_coverage=1 00:06:25.000 --rc genhtml_function_coverage=1 00:06:25.000 --rc genhtml_legend=1 00:06:25.000 --rc geninfo_all_blocks=1 00:06:25.000 --rc geninfo_unexecuted_blocks=1 00:06:25.000 00:06:25.000 ' 00:06:25.000 09:27:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.000 09:27:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.259 09:27:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.259 09:27:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:06:25.259 09:27:14 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:06:25.259 09:27:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.259 09:27:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.259 09:27:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.259 09:27:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.259 09:27:14 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.259 09:27:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.259 09:27:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.259 09:27:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.259 09:27:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.259 09:27:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.259 09:27:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.260 09:27:14 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.260 09:27:14 json_config -- paths/export.sh@5 -- # export PATH 00:06:25.260 09:27:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@51 -- # : 0 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.260 09:27:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:25.260 INFO: JSON configuration test init 00:06:25.260 09:27:14 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.260 09:27:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:25.260 09:27:14 json_config -- json_config/common.sh@9 -- # local app=target 00:06:25.260 09:27:14 json_config -- json_config/common.sh@10 -- # shift 00:06:25.260 09:27:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.260 09:27:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.260 09:27:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.260 09:27:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.260 09:27:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.260 09:27:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=101642 00:06:25.260 09:27:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:25.260 09:27:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.260 Waiting for target to run... 
00:06:25.260 09:27:14 json_config -- json_config/common.sh@25 -- # waitforlisten 101642 /var/tmp/spdk_tgt.sock 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@831 -- # '[' -z 101642 ']' 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.260 09:27:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.260 [2024-10-07 09:27:14.076339] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:06:25.260 [2024-10-07 09:27:14.076419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101642 ] 00:06:25.828 [2024-10-07 09:27:14.587328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.828 [2024-10-07 09:27:14.680332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.087 09:27:15 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.087 09:27:15 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:26.087 09:27:15 json_config -- json_config/common.sh@26 -- # echo '' 00:06:26.087 00:06:26.087 09:27:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:26.087 09:27:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:26.087 09:27:15 json_config -- common/autotest_common.sh@724 
-- # xtrace_disable 00:06:26.087 09:27:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.087 09:27:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:26.087 09:27:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:26.087 09:27:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.087 09:27:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.345 09:27:15 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:26.345 09:27:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:26.345 09:27:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:29.638 09:27:18 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:29.639 09:27:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.639 09:27:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:29.639 09:27:18 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@54 -- # sort 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:29.639 09:27:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.639 09:27:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@237 -- # timing_enter 
create_nvmf_subsystem_config 00:06:29.639 09:27:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.639 09:27:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:29.639 09:27:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.639 09:27:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.899 MallocForNvmf0 00:06:29.899 09:27:18 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.899 09:27:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:30.160 MallocForNvmf1 00:06:30.160 09:27:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:30.160 09:27:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:30.421 [2024-10-07 09:27:19.381855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.421 09:27:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.421 09:27:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.682 09:27:19 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.682 09:27:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:31.256 09:27:19 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:31.256 09:27:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:31.256 09:27:20 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:31.256 09:27:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:31.518 [2024-10-07 09:27:20.505556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:31.777 09:27:20 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:31.777 09:27:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.777 09:27:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.777 09:27:20 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:31.777 09:27:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.777 09:27:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.777 09:27:20 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:31.777 09:27:20 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.777 09:27:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:32.038 MallocBdevForConfigChangeCheck 00:06:32.038 09:27:20 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:32.038 09:27:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.038 09:27:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.038 09:27:20 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:32.038 09:27:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.298 09:27:21 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:32.298 INFO: shutting down applications... 
00:06:32.298 09:27:21 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:32.298 09:27:21 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:32.298 09:27:21 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:32.299 09:27:21 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:34.215 Calling clear_iscsi_subsystem 00:06:34.215 Calling clear_nvmf_subsystem 00:06:34.215 Calling clear_nbd_subsystem 00:06:34.215 Calling clear_ublk_subsystem 00:06:34.215 Calling clear_vhost_blk_subsystem 00:06:34.215 Calling clear_vhost_scsi_subsystem 00:06:34.215 Calling clear_bdev_subsystem 00:06:34.215 09:27:22 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:34.215 09:27:22 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:34.215 09:27:22 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:34.215 09:27:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.215 09:27:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:34.215 09:27:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:34.475 09:27:23 json_config -- json_config/json_config.sh@352 -- # break 00:06:34.475 09:27:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:34.475 09:27:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:34.475 09:27:23 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:34.475 09:27:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:34.475 09:27:23 json_config -- json_config/common.sh@35 -- # [[ -n 101642 ]] 00:06:34.475 09:27:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 101642 00:06:34.475 09:27:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:34.475 09:27:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.475 09:27:23 json_config -- json_config/common.sh@41 -- # kill -0 101642 00:06:34.475 09:27:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.048 09:27:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.048 09:27:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.048 09:27:23 json_config -- json_config/common.sh@41 -- # kill -0 101642 00:06:35.048 09:27:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:35.048 09:27:23 json_config -- json_config/common.sh@43 -- # break 00:06:35.048 09:27:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:35.048 09:27:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:35.048 SPDK target shutdown done 00:06:35.048 09:27:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:35.048 INFO: relaunching applications... 
00:06:35.048 09:27:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.048 09:27:23 json_config -- json_config/common.sh@9 -- # local app=target 00:06:35.048 09:27:23 json_config -- json_config/common.sh@10 -- # shift 00:06:35.048 09:27:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.048 09:27:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.048 09:27:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.048 09:27:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.048 09:27:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.048 09:27:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=102909 00:06:35.048 09:27:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.048 09:27:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.048 Waiting for target to run... 00:06:35.048 09:27:23 json_config -- json_config/common.sh@25 -- # waitforlisten 102909 /var/tmp/spdk_tgt.sock 00:06:35.048 09:27:23 json_config -- common/autotest_common.sh@831 -- # '[' -z 102909 ']' 00:06:35.048 09:27:23 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.048 09:27:23 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.048 09:27:23 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:35.048 09:27:23 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.048 09:27:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.048 [2024-10-07 09:27:23.869752] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:06:35.048 [2024-10-07 09:27:23.869863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102909 ] 00:06:35.308 [2024-10-07 09:27:24.211829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.308 [2024-10-07 09:27:24.294380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.607 [2024-10-07 09:27:27.337551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.607 [2024-10-07 09:27:27.369921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:38.607 09:27:27 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.607 09:27:27 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:38.607 09:27:27 json_config -- json_config/common.sh@26 -- # echo '' 00:06:38.607 00:06:38.607 09:27:27 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:38.607 09:27:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:38.607 INFO: Checking if target configuration is the same... 
00:06:38.607 09:27:27 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.607 09:27:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:38.607 09:27:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:38.607 + '[' 2 -ne 2 ']' 00:06:38.607 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:38.607 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:38.607 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:38.607 +++ basename /dev/fd/62 00:06:38.607 ++ mktemp /tmp/62.XXX 00:06:38.607 + tmp_file_1=/tmp/62.GfA 00:06:38.607 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.607 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:38.607 + tmp_file_2=/tmp/spdk_tgt_config.json.JMh 00:06:38.607 + ret=0 00:06:38.607 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.866 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.866 + diff -u /tmp/62.GfA /tmp/spdk_tgt_config.json.JMh 00:06:38.866 + echo 'INFO: JSON config files are the same' 00:06:38.866 INFO: JSON config files are the same 00:06:38.866 + rm /tmp/62.GfA /tmp/spdk_tgt_config.json.JMh 00:06:38.866 + exit 0 00:06:38.866 09:27:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:38.866 09:27:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:38.866 INFO: changing configuration and checking if this can be detected... 
00:06:38.866 09:27:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:38.866 09:27:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:39.436 09:27:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:39.436 09:27:28 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:39.436 09:27:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.436 + '[' 2 -ne 2 ']' 00:06:39.437 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:39.437 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:39.437 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:39.437 +++ basename /dev/fd/62 00:06:39.437 ++ mktemp /tmp/62.XXX 00:06:39.437 + tmp_file_1=/tmp/62.8Tz 00:06:39.437 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:39.437 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:39.437 + tmp_file_2=/tmp/spdk_tgt_config.json.aVN 00:06:39.437 + ret=0 00:06:39.437 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:39.696 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:39.696 + diff -u /tmp/62.8Tz /tmp/spdk_tgt_config.json.aVN 00:06:39.696 + ret=1 00:06:39.696 + echo '=== Start of file: /tmp/62.8Tz ===' 00:06:39.696 + cat /tmp/62.8Tz 00:06:39.696 + echo '=== End of file: /tmp/62.8Tz ===' 00:06:39.696 + echo '' 00:06:39.696 + echo '=== Start of file: /tmp/spdk_tgt_config.json.aVN ===' 00:06:39.696 + cat /tmp/spdk_tgt_config.json.aVN 00:06:39.696 + echo '=== End of file: /tmp/spdk_tgt_config.json.aVN ===' 00:06:39.696 + echo '' 00:06:39.696 + rm /tmp/62.8Tz /tmp/spdk_tgt_config.json.aVN 00:06:39.696 + exit 1 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:39.696 INFO: configuration change detected. 
00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 102909 ]] 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.696 09:27:28 json_config -- json_config/json_config.sh@330 -- # killprocess 102909 00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@950 -- # '[' -z 102909 ']' 00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@954 -- # kill -0 102909 
00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@955 -- # uname
00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102909
00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102909'
00:06:39.696 killing process with pid 102909
00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@969 -- # kill 102909
00:06:39.696 09:27:28 json_config -- common/autotest_common.sh@974 -- # wait 102909
00:06:41.610 09:27:30 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:41.610 09:27:30 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:06:41.610 09:27:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:41.610 09:27:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:41.610 09:27:30 json_config -- json_config/json_config.sh@335 -- # return 0
00:06:41.610 09:27:30 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:06:41.610 INFO: Success
00:06:41.610
00:06:41.610 real	0m16.420s
00:06:41.610 user	0m18.182s
00:06:41.610 sys	0m2.588s
00:06:41.610 09:27:30 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:41.610 09:27:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:41.610 ************************************
00:06:41.610 END TEST json_config
00:06:41.610 ************************************
00:06:41.610 09:27:30 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:41.610 09:27:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:41.610 09:27:30 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:41.610 09:27:30 -- common/autotest_common.sh@10 -- # set +x
00:06:41.610 ************************************
00:06:41.610 START TEST json_config_extra_key
00:06:41.610 ************************************
00:06:41.610 09:27:30 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:41.610 09:27:30 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:41.610 09:27:30 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version
00:06:41.610 09:27:30 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:41.610 09:27:30 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.610 09:27:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:41.610 09:27:30 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.610 09:27:30 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.610 --rc genhtml_branch_coverage=1 00:06:41.610 --rc genhtml_function_coverage=1 00:06:41.610 --rc genhtml_legend=1 00:06:41.610 --rc geninfo_all_blocks=1 
00:06:41.610 --rc geninfo_unexecuted_blocks=1 00:06:41.610 00:06:41.610 ' 00:06:41.610 09:27:30 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.611 --rc genhtml_branch_coverage=1 00:06:41.611 --rc genhtml_function_coverage=1 00:06:41.611 --rc genhtml_legend=1 00:06:41.611 --rc geninfo_all_blocks=1 00:06:41.611 --rc geninfo_unexecuted_blocks=1 00:06:41.611 00:06:41.611 ' 00:06:41.611 09:27:30 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.611 --rc genhtml_branch_coverage=1 00:06:41.611 --rc genhtml_function_coverage=1 00:06:41.611 --rc genhtml_legend=1 00:06:41.611 --rc geninfo_all_blocks=1 00:06:41.611 --rc geninfo_unexecuted_blocks=1 00:06:41.611 00:06:41.611 ' 00:06:41.611 09:27:30 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.611 --rc genhtml_branch_coverage=1 00:06:41.611 --rc genhtml_function_coverage=1 00:06:41.611 --rc genhtml_legend=1 00:06:41.611 --rc geninfo_all_blocks=1 00:06:41.611 --rc geninfo_unexecuted_blocks=1 00:06:41.611 00:06:41.611 ' 00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:41.611 09:27:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:06:41.611 09:27:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:41.611 09:27:30 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:41.611 09:27:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:41.611 09:27:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:41.611 09:27:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:41.611 09:27:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:41.611 09:27:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:41.611 09:27:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:41.611 09:27:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:06:41.611 INFO: launching applications...
00:06:41.611 09:27:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=103794
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:41.611 Waiting for target to run...
00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:41.611 09:27:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 103794 /var/tmp/spdk_tgt.sock 00:06:41.611 09:27:30 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 103794 ']' 00:06:41.611 09:27:30 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.611 09:27:30 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.611 09:27:30 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.612 09:27:30 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.612 09:27:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:41.612 [2024-10-07 09:27:30.545922] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:06:41.612 [2024-10-07 09:27:30.546031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103794 ]
00:06:42.180 [2024-10-07 09:27:31.089332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.439 [2024-10-07 09:27:31.185541] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.699 09:27:31 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:42.699 09:27:31 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:42.699
00:06:42.699 09:27:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:42.699 INFO: shutting down applications...
00:06:42.699 09:27:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 103794 ]]
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 103794
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103794
00:06:42.699 09:27:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:43.270 09:27:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:43.270 09:27:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:43.270 09:27:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103794
00:06:43.270 09:27:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:43.270 09:27:32 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:43.270 09:27:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:43.270 09:27:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:43.270 SPDK target shutdown done
00:06:43.270 09:27:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:06:43.270 Success
00:06:43.270
00:06:43.270 real	0m1.697s
00:06:43.270 user	0m1.523s
00:06:43.270 sys	0m0.671s
00:06:43.270 09:27:32 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:43.270 09:27:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:43.270 ************************************
00:06:43.270 END TEST json_config_extra_key
00:06:43.270 ************************************
00:06:43.270 09:27:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:43.270 09:27:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:43.270 09:27:32 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:43.270 09:27:32 -- common/autotest_common.sh@10 -- # set +x
00:06:43.270 ************************************
00:06:43.270 START TEST alias_rpc
00:06:43.270 ************************************
00:06:43.270 09:27:32 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:43.271 * Looking for test storage...
00:06:43.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@345 -- # : 1
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:43.271 09:27:32 alias_rpc -- scripts/common.sh@368 -- # return 0
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:43.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:43.271 --rc genhtml_branch_coverage=1
00:06:43.271 --rc genhtml_function_coverage=1
00:06:43.271 --rc genhtml_legend=1
00:06:43.271 --rc geninfo_all_blocks=1
00:06:43.271 --rc geninfo_unexecuted_blocks=1
00:06:43.271
00:06:43.271 '
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:43.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:43.271 --rc genhtml_branch_coverage=1
00:06:43.271 --rc genhtml_function_coverage=1
00:06:43.271 --rc genhtml_legend=1
00:06:43.271 --rc geninfo_all_blocks=1
00:06:43.271 --rc geninfo_unexecuted_blocks=1
00:06:43.271
00:06:43.271 '
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:43.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:43.271 --rc genhtml_branch_coverage=1
00:06:43.271 --rc genhtml_function_coverage=1
00:06:43.271 --rc genhtml_legend=1
00:06:43.271 --rc geninfo_all_blocks=1
00:06:43.271 --rc geninfo_unexecuted_blocks=1
00:06:43.271
00:06:43.271 '
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:43.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:43.271 --rc genhtml_branch_coverage=1
00:06:43.271 --rc genhtml_function_coverage=1
00:06:43.271 --rc genhtml_legend=1
00:06:43.271 --rc geninfo_all_blocks=1
00:06:43.271 --rc geninfo_unexecuted_blocks=1
00:06:43.271
00:06:43.271 '
00:06:43.271 09:27:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:43.271 09:27:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=104093
00:06:43.271 09:27:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:43.271 09:27:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 104093
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 104093 ']'
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:43.271 09:27:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:43.532 [2024-10-07 09:27:32.289815] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:06:43.532 [2024-10-07 09:27:32.289917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104093 ]
00:06:43.532 [2024-10-07 09:27:32.345575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.532 [2024-10-07 09:27:32.456599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.793 09:27:32 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:43.793 09:27:32 alias_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:43.793 09:27:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:44.054 09:27:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 104093
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 104093 ']'
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 104093
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@955 -- # uname
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104093
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104093'
00:06:44.054 killing process with pid 104093
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@969 -- # kill 104093
00:06:44.054 09:27:33 alias_rpc -- common/autotest_common.sh@974 -- # wait 104093
00:06:44.624
00:06:44.624 real	0m1.408s
00:06:44.624 user	0m1.522s
00:06:44.624 sys	0m0.448s
00:06:44.624 09:27:33 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:44.624 09:27:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:44.624 ************************************
00:06:44.624 END TEST alias_rpc
00:06:44.624 ************************************
00:06:44.624 09:27:33 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:06:44.624 09:27:33 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:44.624 09:27:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:44.624 09:27:33 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:44.624 09:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:44.624 ************************************
00:06:44.624 START TEST spdkcli_tcp
00:06:44.624 ************************************
00:06:44.624 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:44.624 * Looking for test storage...
00:06:44.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:44.624 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:44.624 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:44.624 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:44.883 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.883 09:27:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:44.883 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.883 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.883 --rc genhtml_branch_coverage=1 00:06:44.883 --rc genhtml_function_coverage=1 00:06:44.883 --rc genhtml_legend=1 00:06:44.883 --rc geninfo_all_blocks=1 00:06:44.883 --rc geninfo_unexecuted_blocks=1 00:06:44.883 00:06:44.883 ' 00:06:44.883 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.883 --rc genhtml_branch_coverage=1 00:06:44.883 --rc genhtml_function_coverage=1 00:06:44.883 --rc genhtml_legend=1 00:06:44.883 --rc geninfo_all_blocks=1 00:06:44.883 --rc geninfo_unexecuted_blocks=1 00:06:44.883 00:06:44.883 ' 00:06:44.883 09:27:33 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.883 --rc genhtml_branch_coverage=1 00:06:44.883 --rc genhtml_function_coverage=1 00:06:44.883 --rc genhtml_legend=1 00:06:44.883 --rc geninfo_all_blocks=1 00:06:44.883 --rc geninfo_unexecuted_blocks=1 00:06:44.883 00:06:44.883 ' 00:06:44.883 09:27:33 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.883 --rc genhtml_branch_coverage=1 00:06:44.883 --rc genhtml_function_coverage=1 00:06:44.883 --rc genhtml_legend=1 00:06:44.883 --rc geninfo_all_blocks=1 00:06:44.883 --rc geninfo_unexecuted_blocks=1 00:06:44.883 00:06:44.883 ' 00:06:44.883 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:44.883 09:27:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:44.884 09:27:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:44.884 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:44.884 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:44.884 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:44.884 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:44.884 09:27:33 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:44.884 09:27:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.884 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104291 00:06:44.884 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 104291 00:06:44.884 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:44.884 09:27:33 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 104291 ']' 00:06:44.884 09:27:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.884 09:27:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.884 09:27:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.884 09:27:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.884 09:27:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.884 [2024-10-07 09:27:33.750189] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:06:44.884 [2024-10-07 09:27:33.750268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104291 ] 00:06:44.884 [2024-10-07 09:27:33.804380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.144 [2024-10-07 09:27:33.910401] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.144 [2024-10-07 09:27:33.910405] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.404 09:27:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.404 09:27:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:45.404 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104301 00:06:45.404 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:45.404 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:45.664 [ 00:06:45.664 "bdev_malloc_delete", 00:06:45.664 "bdev_malloc_create", 00:06:45.664 "bdev_null_resize", 00:06:45.664 "bdev_null_delete", 00:06:45.664 "bdev_null_create", 00:06:45.664 "bdev_nvme_cuse_unregister", 00:06:45.664 "bdev_nvme_cuse_register", 00:06:45.664 "bdev_opal_new_user", 00:06:45.664 "bdev_opal_set_lock_state", 00:06:45.664 "bdev_opal_delete", 00:06:45.664 "bdev_opal_get_info", 00:06:45.664 "bdev_opal_create", 00:06:45.664 "bdev_nvme_opal_revert", 00:06:45.664 "bdev_nvme_opal_init", 00:06:45.664 "bdev_nvme_send_cmd", 00:06:45.664 "bdev_nvme_set_keys", 00:06:45.664 "bdev_nvme_get_path_iostat", 00:06:45.664 "bdev_nvme_get_mdns_discovery_info", 00:06:45.664 "bdev_nvme_stop_mdns_discovery", 00:06:45.664 "bdev_nvme_start_mdns_discovery", 00:06:45.664 "bdev_nvme_set_multipath_policy", 00:06:45.664 "bdev_nvme_set_preferred_path", 00:06:45.664 "bdev_nvme_get_io_paths", 00:06:45.664 "bdev_nvme_remove_error_injection", 00:06:45.664 "bdev_nvme_add_error_injection", 00:06:45.664 "bdev_nvme_get_discovery_info", 00:06:45.664 "bdev_nvme_stop_discovery", 00:06:45.664 "bdev_nvme_start_discovery", 00:06:45.664 "bdev_nvme_get_controller_health_info", 00:06:45.664 "bdev_nvme_disable_controller", 00:06:45.664 "bdev_nvme_enable_controller", 00:06:45.664 "bdev_nvme_reset_controller", 00:06:45.664 "bdev_nvme_get_transport_statistics", 00:06:45.664 "bdev_nvme_apply_firmware", 00:06:45.664 "bdev_nvme_detach_controller", 00:06:45.664 "bdev_nvme_get_controllers", 00:06:45.664 "bdev_nvme_attach_controller", 00:06:45.664 "bdev_nvme_set_hotplug", 00:06:45.664 "bdev_nvme_set_options", 00:06:45.664 "bdev_passthru_delete", 00:06:45.664 "bdev_passthru_create", 00:06:45.665 "bdev_lvol_set_parent_bdev", 00:06:45.665 "bdev_lvol_set_parent", 00:06:45.665 "bdev_lvol_check_shallow_copy", 00:06:45.665 "bdev_lvol_start_shallow_copy", 00:06:45.665 
"bdev_lvol_grow_lvstore", 00:06:45.665 "bdev_lvol_get_lvols", 00:06:45.665 "bdev_lvol_get_lvstores", 00:06:45.665 "bdev_lvol_delete", 00:06:45.665 "bdev_lvol_set_read_only", 00:06:45.665 "bdev_lvol_resize", 00:06:45.665 "bdev_lvol_decouple_parent", 00:06:45.665 "bdev_lvol_inflate", 00:06:45.665 "bdev_lvol_rename", 00:06:45.665 "bdev_lvol_clone_bdev", 00:06:45.665 "bdev_lvol_clone", 00:06:45.665 "bdev_lvol_snapshot", 00:06:45.665 "bdev_lvol_create", 00:06:45.665 "bdev_lvol_delete_lvstore", 00:06:45.665 "bdev_lvol_rename_lvstore", 00:06:45.665 "bdev_lvol_create_lvstore", 00:06:45.665 "bdev_raid_set_options", 00:06:45.665 "bdev_raid_remove_base_bdev", 00:06:45.665 "bdev_raid_add_base_bdev", 00:06:45.665 "bdev_raid_delete", 00:06:45.665 "bdev_raid_create", 00:06:45.665 "bdev_raid_get_bdevs", 00:06:45.665 "bdev_error_inject_error", 00:06:45.665 "bdev_error_delete", 00:06:45.665 "bdev_error_create", 00:06:45.665 "bdev_split_delete", 00:06:45.665 "bdev_split_create", 00:06:45.665 "bdev_delay_delete", 00:06:45.665 "bdev_delay_create", 00:06:45.665 "bdev_delay_update_latency", 00:06:45.665 "bdev_zone_block_delete", 00:06:45.665 "bdev_zone_block_create", 00:06:45.665 "blobfs_create", 00:06:45.665 "blobfs_detect", 00:06:45.665 "blobfs_set_cache_size", 00:06:45.665 "bdev_aio_delete", 00:06:45.665 "bdev_aio_rescan", 00:06:45.665 "bdev_aio_create", 00:06:45.665 "bdev_ftl_set_property", 00:06:45.665 "bdev_ftl_get_properties", 00:06:45.665 "bdev_ftl_get_stats", 00:06:45.665 "bdev_ftl_unmap", 00:06:45.665 "bdev_ftl_unload", 00:06:45.665 "bdev_ftl_delete", 00:06:45.665 "bdev_ftl_load", 00:06:45.665 "bdev_ftl_create", 00:06:45.665 "bdev_virtio_attach_controller", 00:06:45.665 "bdev_virtio_scsi_get_devices", 00:06:45.665 "bdev_virtio_detach_controller", 00:06:45.665 "bdev_virtio_blk_set_hotplug", 00:06:45.665 "bdev_iscsi_delete", 00:06:45.665 "bdev_iscsi_create", 00:06:45.665 "bdev_iscsi_set_options", 00:06:45.665 "accel_error_inject_error", 00:06:45.665 "ioat_scan_accel_module", 
00:06:45.665 "dsa_scan_accel_module", 00:06:45.665 "iaa_scan_accel_module", 00:06:45.665 "vfu_virtio_create_fs_endpoint", 00:06:45.665 "vfu_virtio_create_scsi_endpoint", 00:06:45.665 "vfu_virtio_scsi_remove_target", 00:06:45.665 "vfu_virtio_scsi_add_target", 00:06:45.665 "vfu_virtio_create_blk_endpoint", 00:06:45.665 "vfu_virtio_delete_endpoint", 00:06:45.665 "keyring_file_remove_key", 00:06:45.665 "keyring_file_add_key", 00:06:45.665 "keyring_linux_set_options", 00:06:45.665 "fsdev_aio_delete", 00:06:45.665 "fsdev_aio_create", 00:06:45.665 "iscsi_get_histogram", 00:06:45.665 "iscsi_enable_histogram", 00:06:45.665 "iscsi_set_options", 00:06:45.665 "iscsi_get_auth_groups", 00:06:45.665 "iscsi_auth_group_remove_secret", 00:06:45.665 "iscsi_auth_group_add_secret", 00:06:45.665 "iscsi_delete_auth_group", 00:06:45.665 "iscsi_create_auth_group", 00:06:45.665 "iscsi_set_discovery_auth", 00:06:45.665 "iscsi_get_options", 00:06:45.665 "iscsi_target_node_request_logout", 00:06:45.665 "iscsi_target_node_set_redirect", 00:06:45.665 "iscsi_target_node_set_auth", 00:06:45.665 "iscsi_target_node_add_lun", 00:06:45.665 "iscsi_get_stats", 00:06:45.665 "iscsi_get_connections", 00:06:45.665 "iscsi_portal_group_set_auth", 00:06:45.665 "iscsi_start_portal_group", 00:06:45.665 "iscsi_delete_portal_group", 00:06:45.665 "iscsi_create_portal_group", 00:06:45.665 "iscsi_get_portal_groups", 00:06:45.665 "iscsi_delete_target_node", 00:06:45.665 "iscsi_target_node_remove_pg_ig_maps", 00:06:45.665 "iscsi_target_node_add_pg_ig_maps", 00:06:45.665 "iscsi_create_target_node", 00:06:45.665 "iscsi_get_target_nodes", 00:06:45.665 "iscsi_delete_initiator_group", 00:06:45.665 "iscsi_initiator_group_remove_initiators", 00:06:45.665 "iscsi_initiator_group_add_initiators", 00:06:45.665 "iscsi_create_initiator_group", 00:06:45.665 "iscsi_get_initiator_groups", 00:06:45.665 "nvmf_set_crdt", 00:06:45.665 "nvmf_set_config", 00:06:45.665 "nvmf_set_max_subsystems", 00:06:45.665 "nvmf_stop_mdns_prr", 
00:06:45.665 "nvmf_publish_mdns_prr", 00:06:45.665 "nvmf_subsystem_get_listeners", 00:06:45.665 "nvmf_subsystem_get_qpairs", 00:06:45.665 "nvmf_subsystem_get_controllers", 00:06:45.665 "nvmf_get_stats", 00:06:45.665 "nvmf_get_transports", 00:06:45.665 "nvmf_create_transport", 00:06:45.665 "nvmf_get_targets", 00:06:45.665 "nvmf_delete_target", 00:06:45.665 "nvmf_create_target", 00:06:45.665 "nvmf_subsystem_allow_any_host", 00:06:45.665 "nvmf_subsystem_set_keys", 00:06:45.665 "nvmf_subsystem_remove_host", 00:06:45.665 "nvmf_subsystem_add_host", 00:06:45.665 "nvmf_ns_remove_host", 00:06:45.665 "nvmf_ns_add_host", 00:06:45.665 "nvmf_subsystem_remove_ns", 00:06:45.665 "nvmf_subsystem_set_ns_ana_group", 00:06:45.665 "nvmf_subsystem_add_ns", 00:06:45.665 "nvmf_subsystem_listener_set_ana_state", 00:06:45.665 "nvmf_discovery_get_referrals", 00:06:45.665 "nvmf_discovery_remove_referral", 00:06:45.665 "nvmf_discovery_add_referral", 00:06:45.665 "nvmf_subsystem_remove_listener", 00:06:45.665 "nvmf_subsystem_add_listener", 00:06:45.665 "nvmf_delete_subsystem", 00:06:45.665 "nvmf_create_subsystem", 00:06:45.665 "nvmf_get_subsystems", 00:06:45.665 "env_dpdk_get_mem_stats", 00:06:45.665 "nbd_get_disks", 00:06:45.665 "nbd_stop_disk", 00:06:45.665 "nbd_start_disk", 00:06:45.665 "ublk_recover_disk", 00:06:45.665 "ublk_get_disks", 00:06:45.665 "ublk_stop_disk", 00:06:45.665 "ublk_start_disk", 00:06:45.665 "ublk_destroy_target", 00:06:45.665 "ublk_create_target", 00:06:45.665 "virtio_blk_create_transport", 00:06:45.665 "virtio_blk_get_transports", 00:06:45.665 "vhost_controller_set_coalescing", 00:06:45.665 "vhost_get_controllers", 00:06:45.665 "vhost_delete_controller", 00:06:45.665 "vhost_create_blk_controller", 00:06:45.665 "vhost_scsi_controller_remove_target", 00:06:45.665 "vhost_scsi_controller_add_target", 00:06:45.665 "vhost_start_scsi_controller", 00:06:45.665 "vhost_create_scsi_controller", 00:06:45.665 "thread_set_cpumask", 00:06:45.665 "scheduler_set_options", 00:06:45.665 
"framework_get_governor", 00:06:45.665 "framework_get_scheduler", 00:06:45.665 "framework_set_scheduler", 00:06:45.665 "framework_get_reactors", 00:06:45.665 "thread_get_io_channels", 00:06:45.665 "thread_get_pollers", 00:06:45.665 "thread_get_stats", 00:06:45.665 "framework_monitor_context_switch", 00:06:45.665 "spdk_kill_instance", 00:06:45.665 "log_enable_timestamps", 00:06:45.665 "log_get_flags", 00:06:45.665 "log_clear_flag", 00:06:45.665 "log_set_flag", 00:06:45.665 "log_get_level", 00:06:45.665 "log_set_level", 00:06:45.665 "log_get_print_level", 00:06:45.665 "log_set_print_level", 00:06:45.665 "framework_enable_cpumask_locks", 00:06:45.665 "framework_disable_cpumask_locks", 00:06:45.665 "framework_wait_init", 00:06:45.665 "framework_start_init", 00:06:45.665 "scsi_get_devices", 00:06:45.665 "bdev_get_histogram", 00:06:45.665 "bdev_enable_histogram", 00:06:45.665 "bdev_set_qos_limit", 00:06:45.665 "bdev_set_qd_sampling_period", 00:06:45.665 "bdev_get_bdevs", 00:06:45.665 "bdev_reset_iostat", 00:06:45.665 "bdev_get_iostat", 00:06:45.665 "bdev_examine", 00:06:45.665 "bdev_wait_for_examine", 00:06:45.665 "bdev_set_options", 00:06:45.665 "accel_get_stats", 00:06:45.665 "accel_set_options", 00:06:45.665 "accel_set_driver", 00:06:45.665 "accel_crypto_key_destroy", 00:06:45.665 "accel_crypto_keys_get", 00:06:45.665 "accel_crypto_key_create", 00:06:45.665 "accel_assign_opc", 00:06:45.665 "accel_get_module_info", 00:06:45.665 "accel_get_opc_assignments", 00:06:45.665 "vmd_rescan", 00:06:45.665 "vmd_remove_device", 00:06:45.665 "vmd_enable", 00:06:45.665 "sock_get_default_impl", 00:06:45.665 "sock_set_default_impl", 00:06:45.665 "sock_impl_set_options", 00:06:45.665 "sock_impl_get_options", 00:06:45.665 "iobuf_get_stats", 00:06:45.665 "iobuf_set_options", 00:06:45.665 "keyring_get_keys", 00:06:45.665 "vfu_tgt_set_base_path", 00:06:45.665 "framework_get_pci_devices", 00:06:45.665 "framework_get_config", 00:06:45.665 "framework_get_subsystems", 00:06:45.665 
"fsdev_set_opts", 00:06:45.665 "fsdev_get_opts", 00:06:45.665 "trace_get_info", 00:06:45.665 "trace_get_tpoint_group_mask", 00:06:45.665 "trace_disable_tpoint_group", 00:06:45.665 "trace_enable_tpoint_group", 00:06:45.665 "trace_clear_tpoint_mask", 00:06:45.665 "trace_set_tpoint_mask", 00:06:45.665 "notify_get_notifications", 00:06:45.665 "notify_get_types", 00:06:45.665 "spdk_get_version", 00:06:45.665 "rpc_get_methods" 00:06:45.665 ] 00:06:45.665 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:45.665 09:27:34 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.665 09:27:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.665 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:45.665 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104291 00:06:45.665 09:27:34 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 104291 ']' 00:06:45.665 09:27:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 104291 00:06:45.665 09:27:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:45.665 09:27:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.666 09:27:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104291 00:06:45.666 09:27:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.666 09:27:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.666 09:27:34 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104291' 00:06:45.666 killing process with pid 104291 00:06:45.666 09:27:34 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 104291 00:06:45.666 09:27:34 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 104291 00:06:46.236 00:06:46.236 real 0m1.431s 00:06:46.236 user 0m2.473s 00:06:46.236 sys 0m0.499s 00:06:46.236 09:27:34 spdkcli_tcp -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:46.236 09:27:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.236 ************************************ 00:06:46.236 END TEST spdkcli_tcp 00:06:46.236 ************************************ 00:06:46.236 09:27:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:46.236 09:27:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.236 09:27:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.236 09:27:34 -- common/autotest_common.sh@10 -- # set +x 00:06:46.236 ************************************ 00:06:46.236 START TEST dpdk_mem_utility 00:06:46.236 ************************************ 00:06:46.236 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:46.236 * Looking for test storage... 00:06:46.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:46.236 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.236 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.236 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.236 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.236 09:27:35 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.236 09:27:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:46.236 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.237 09:27:35 dpdk_mem_utility 
-- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.237 --rc genhtml_branch_coverage=1 00:06:46.237 --rc genhtml_function_coverage=1 00:06:46.237 --rc genhtml_legend=1 00:06:46.237 --rc geninfo_all_blocks=1 00:06:46.237 --rc geninfo_unexecuted_blocks=1 00:06:46.237 00:06:46.237 ' 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.237 --rc genhtml_branch_coverage=1 00:06:46.237 --rc genhtml_function_coverage=1 00:06:46.237 --rc genhtml_legend=1 00:06:46.237 --rc geninfo_all_blocks=1 00:06:46.237 --rc geninfo_unexecuted_blocks=1 00:06:46.237 00:06:46.237 ' 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.237 --rc genhtml_branch_coverage=1 00:06:46.237 --rc genhtml_function_coverage=1 00:06:46.237 --rc genhtml_legend=1 00:06:46.237 --rc geninfo_all_blocks=1 00:06:46.237 --rc geninfo_unexecuted_blocks=1 00:06:46.237 00:06:46.237 ' 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.237 --rc genhtml_branch_coverage=1 00:06:46.237 --rc genhtml_function_coverage=1 00:06:46.237 --rc genhtml_legend=1 00:06:46.237 --rc geninfo_all_blocks=1 00:06:46.237 --rc geninfo_unexecuted_blocks=1 00:06:46.237 00:06:46.237 ' 00:06:46.237 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:46.237 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104501 00:06:46.237 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:46.237 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104501 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 104501 ']' 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.237 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.237 [2024-10-07 09:27:35.224930] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:06:46.237 [2024-10-07 09:27:35.225047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104501 ] 00:06:46.497 [2024-10-07 09:27:35.281198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.497 [2024-10-07 09:27:35.389887] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.757 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.757 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:46.757 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:46.757 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:46.757 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.757 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.757 { 00:06:46.757 "filename": "/tmp/spdk_mem_dump.txt" 00:06:46.757 } 00:06:46.757 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.757 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:46.757 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:46.757 1 heaps totaling size 860.000000 MiB 00:06:46.757 size: 860.000000 MiB heap id: 0 00:06:46.757 end heaps---------- 00:06:46.757 9 mempools totaling size 642.649841 MiB 00:06:46.757 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:46.757 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:46.757 size: 92.545471 MiB name: bdev_io_104501 00:06:46.757 size: 51.011292 MiB name: evtpool_104501 00:06:46.757 size: 50.003479 MiB name: msgpool_104501 00:06:46.757 
size: 36.509338 MiB name: fsdev_io_104501
00:06:46.757 size: 21.763794 MiB name: PDU_Pool
00:06:46.757 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:46.757 size: 0.026123 MiB name: Session_Pool
00:06:46.757 end mempools-------
00:06:46.757 6 memzones totaling size 4.142822 MiB
00:06:46.757 size: 1.000366 MiB name: RG_ring_0_104501
00:06:46.757 size: 1.000366 MiB name: RG_ring_1_104501
00:06:46.757 size: 1.000366 MiB name: RG_ring_4_104501
00:06:46.757 size: 1.000366 MiB name: RG_ring_5_104501
00:06:46.757 size: 0.125366 MiB name: RG_ring_2_104501
00:06:46.757 size: 0.015991 MiB name: RG_ring_3_104501
00:06:46.757 end memzones-------
00:06:46.757 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:47.019 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16
00:06:47.019 list of free elements. size: 13.984680 MiB
00:06:47.019 element at address: 0x200000400000 with size: 1.999512 MiB
00:06:47.019 element at address: 0x200000800000 with size: 1.996948 MiB
00:06:47.019 element at address: 0x20001bc00000 with size: 0.999878 MiB
00:06:47.019 element at address: 0x20001be00000 with size: 0.999878 MiB
00:06:47.019 element at address: 0x200034a00000 with size: 0.994446 MiB
00:06:47.019 element at address: 0x200009600000 with size: 0.959839 MiB
00:06:47.019 element at address: 0x200015e00000 with size: 0.954285 MiB
00:06:47.019 element at address: 0x20001c000000 with size: 0.936584 MiB
00:06:47.019 element at address: 0x200000200000 with size: 0.841614 MiB
00:06:47.019 element at address: 0x20001d800000 with size: 0.582886 MiB
00:06:47.019 element at address: 0x200003e00000 with size: 0.495422 MiB
00:06:47.019 element at address: 0x20000d800000 with size: 0.490723 MiB
00:06:47.019 element at address: 0x20001c200000 with size: 0.485657 MiB
00:06:47.019 element at address: 0x200007000000 with size: 0.481934 MiB
00:06:47.020 element at address: 0x20002ac00000 with size: 0.410034 MiB
00:06:47.020 element at address: 0x200003a00000 with size: 0.355042 MiB
00:06:47.020 list of standard malloc elements. size: 199.218628 MiB
00:06:47.020 element at address: 0x20000d9fff80 with size: 132.000122 MiB
00:06:47.020 element at address: 0x2000097fff80 with size: 64.000122 MiB
00:06:47.020 element at address: 0x20001bcfff80 with size: 1.000122 MiB
00:06:47.020 element at address: 0x20001befff80 with size: 1.000122 MiB
00:06:47.020 element at address: 0x20001c0fff80 with size: 1.000122 MiB
00:06:47.020 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:06:47.020 element at address: 0x20001c0eff00 with size: 0.062622 MiB
00:06:47.020 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:06:47.020 element at address: 0x20001c0efdc0 with size: 0.000305 MiB
00:06:47.020 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:06:47.020 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:06:47.020 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:06:47.020 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:06:47.020 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:06:47.020 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:06:47.020 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003a5ae40 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003a5f300 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003a7f5c0 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003a7f680 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003aff940 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003affb40 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003e7ed40 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003eff000 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20000707b600 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20000707b6c0 with size: 0.000183 MiB
00:06:47.020 element at address: 0x2000070fb980 with size: 0.000183 MiB
00:06:47.020 element at address: 0x2000096fdd80 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20000d87da00 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20000d87dac0 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20000d8fdd80 with size: 0.000183 MiB
00:06:47.020 element at address: 0x200015ef44c0 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20001c0efc40 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20001c0efd00 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20001c2bc740 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20001d895380 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20001d895440 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20002ac68f80 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20002ac69040 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20002ac6fc40 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20002ac6fe40 with size: 0.000183 MiB
00:06:47.020 element at address: 0x20002ac6ff00 with size: 0.000183 MiB
00:06:47.020 list of memzone associated elements. size: 646.796692 MiB
00:06:47.020 element at address: 0x20001d895500 with size: 211.416748 MiB
00:06:47.020 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:47.020 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB
00:06:47.020 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:47.020 element at address: 0x200015ff4780 with size: 92.045044 MiB
00:06:47.020 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_104501_0
00:06:47.020 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:47.020 associated memzone info: size: 48.002930 MiB name: MP_evtpool_104501_0
00:06:47.020 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:47.020 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104501_0
00:06:47.020 element at address: 0x2000071fdb80 with size: 36.008911 MiB
00:06:47.020 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_104501_0
00:06:47.020 element at address: 0x20001c3be940 with size: 20.255554 MiB
00:06:47.020 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:47.020 element at address: 0x200034bfeb40 with size: 18.005066 MiB
00:06:47.020 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:47.020 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:47.020 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_104501
00:06:47.020 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:47.020 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104501
00:06:47.020 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:47.020 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104501
00:06:47.020 element at address: 0x20000d8fde40 with size: 1.008118 MiB
00:06:47.020 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:47.020 element at address: 0x20001c2bc800 with size: 1.008118 MiB
00:06:47.020 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:47.020 element at address: 0x2000096fde40 with size: 1.008118 MiB
00:06:47.020 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:47.020 element at address: 0x2000070fba40 with size: 1.008118 MiB
00:06:47.020 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:47.020 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:47.020 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104501
00:06:47.020 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:47.020 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104501
00:06:47.020 element at address: 0x200015ef4580 with size: 1.000488 MiB
00:06:47.020 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104501
00:06:47.020 element at address: 0x200034afe940 with size: 1.000488 MiB
00:06:47.020 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104501
00:06:47.020 element at address: 0x200003a7f740 with size: 0.500488 MiB
00:06:47.020 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_104501
00:06:47.020 element at address: 0x200003e7ee00 with size: 0.500488 MiB
00:06:47.020 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104501
00:06:47.020 element at address: 0x20000d87db80 with size: 0.500488 MiB
00:06:47.020 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:47.020 element at address: 0x20000707b780 with size: 0.500488 MiB
00:06:47.020 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:47.020 element at address: 0x20001c27c540 with size: 0.250488 MiB
00:06:47.020 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:47.020 element at address: 0x200003a5f3c0 with size: 0.125488 MiB
00:06:47.020 associated memzone info: size: 0.125366 MiB name: RG_ring_2_104501
00:06:47.020 element at address: 0x2000096f5b80 with size: 0.031738 MiB
00:06:47.020 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:47.020 element at address: 0x20002ac69100 with size: 0.023743 MiB
00:06:47.020 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:47.020 element at address: 0x200003a5b100 with size: 0.016113 MiB
00:06:47.020 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104501
00:06:47.020 element at address: 0x20002ac6f240 with size: 0.002441 MiB
00:06:47.020 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:47.020 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:47.020 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104501
00:06:47.020 element at address: 0x200003affa00 with size: 0.000305 MiB
00:06:47.020 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_104501
00:06:47.020 element at address: 0x200003a5af00 with size: 0.000305 MiB
00:06:47.020 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104501
00:06:47.020 element at address: 0x20002ac6fd00 with size: 0.000305 MiB
00:06:47.020 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:47.020 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:47.020 09:27:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104501
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 104501 ']'
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 104501
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104501
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104501'
00:06:47.020 killing process with pid 104501
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 104501
00:06:47.020 09:27:35 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 104501
00:06:47.280
00:06:47.280 real 0m1.222s
00:06:47.280 user 0m1.201s
00:06:47.280 sys 0m0.428s
00:06:47.280 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:47.280 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:47.280 ************************************
00:06:47.280 END TEST dpdk_mem_utility
00:06:47.280 ************************************
00:06:47.280 09:27:36 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:47.280 09:27:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:47.280 09:27:36 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:47.280 09:27:36 -- common/autotest_common.sh@10 -- # set +x
00:06:47.539 ************************************
00:06:47.539 START TEST event
00:06:47.539 ************************************
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:47.539 * Looking for test storage...
00:06:47.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1681 -- # lcov --version
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:47.539 09:27:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:47.539 09:27:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:47.539 09:27:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:47.539 09:27:36 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:47.539 09:27:36 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:47.539 09:27:36 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:47.539 09:27:36 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:47.539 09:27:36 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:47.539 09:27:36 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:47.539 09:27:36 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:47.539 09:27:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:47.539 09:27:36 event -- scripts/common.sh@344 -- # case "$op" in
00:06:47.539 09:27:36 event -- scripts/common.sh@345 -- # : 1
00:06:47.539 09:27:36 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:47.539 09:27:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:47.539 09:27:36 event -- scripts/common.sh@365 -- # decimal 1
00:06:47.539 09:27:36 event -- scripts/common.sh@353 -- # local d=1
00:06:47.539 09:27:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:47.539 09:27:36 event -- scripts/common.sh@355 -- # echo 1
00:06:47.539 09:27:36 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:47.539 09:27:36 event -- scripts/common.sh@366 -- # decimal 2
00:06:47.539 09:27:36 event -- scripts/common.sh@353 -- # local d=2
00:06:47.539 09:27:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:47.539 09:27:36 event -- scripts/common.sh@355 -- # echo 2
00:06:47.539 09:27:36 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:47.539 09:27:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:47.539 09:27:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:47.539 09:27:36 event -- scripts/common.sh@368 -- # return 0
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:47.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.539 --rc genhtml_branch_coverage=1
00:06:47.539 --rc genhtml_function_coverage=1
00:06:47.539 --rc genhtml_legend=1
00:06:47.539 --rc geninfo_all_blocks=1
00:06:47.539 --rc geninfo_unexecuted_blocks=1
00:06:47.539
00:06:47.539 '
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:47.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.539 --rc genhtml_branch_coverage=1
00:06:47.539 --rc genhtml_function_coverage=1
00:06:47.539 --rc genhtml_legend=1
00:06:47.539 --rc geninfo_all_blocks=1
00:06:47.539 --rc geninfo_unexecuted_blocks=1
00:06:47.539
00:06:47.539 '
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:47.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.539 --rc genhtml_branch_coverage=1
00:06:47.539 --rc genhtml_function_coverage=1
00:06:47.539 --rc genhtml_legend=1
00:06:47.539 --rc geninfo_all_blocks=1
00:06:47.539 --rc geninfo_unexecuted_blocks=1
00:06:47.539
00:06:47.539 '
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:47.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.539 --rc genhtml_branch_coverage=1
00:06:47.539 --rc genhtml_function_coverage=1
00:06:47.539 --rc genhtml_legend=1
00:06:47.539 --rc geninfo_all_blocks=1
00:06:47.539 --rc geninfo_unexecuted_blocks=1
00:06:47.539
00:06:47.539 '
00:06:47.539 09:27:36 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:47.539 09:27:36 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:47.539 09:27:36 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:47.539 09:27:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:47.539 09:27:36 event -- common/autotest_common.sh@10 -- # set +x
00:06:47.539 ************************************
00:06:47.539 START TEST event_perf
00:06:47.539 ************************************
00:06:47.539 09:27:36 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:47.539 Running I/O for 1 seconds...[2024-10-07 09:27:36.484811] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:06:47.539 [2024-10-07 09:27:36.484869] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104708 ]
00:06:47.798 [2024-10-07 09:27:36.549820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:47.798 [2024-10-07 09:27:36.656602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:06:47.798 [2024-10-07 09:27:36.656674] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:06:47.798 [2024-10-07 09:27:36.656762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:06:47.798 [2024-10-07 09:27:36.656766] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.186 Running I/O for 1 seconds...
00:06:49.186 lcore 0: 234633
00:06:49.186 lcore 1: 234633
00:06:49.186 lcore 2: 234633
00:06:49.186 lcore 3: 234633
00:06:49.186 done.
00:06:49.186
00:06:49.186 real 0m1.301s
00:06:49.186 user 0m4.207s
00:06:49.186 sys 0m0.089s
00:06:49.186 09:27:37 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:49.186 09:27:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:49.186 ************************************
00:06:49.186 END TEST event_perf
00:06:49.186 ************************************
00:06:49.186 09:27:37 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:49.186 09:27:37 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:49.186 09:27:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:49.186 09:27:37 event -- common/autotest_common.sh@10 -- # set +x
00:06:49.186 ************************************
00:06:49.186 START TEST event_reactor
00:06:49.186 ************************************
00:06:49.186 09:27:37 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:49.186 [2024-10-07 09:27:37.834469] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:06:49.186 [2024-10-07 09:27:37.834531] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104955 ]
00:06:49.186 [2024-10-07 09:27:37.889876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.186 [2024-10-07 09:27:37.991716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.126 test_start
00:06:50.126 oneshot
00:06:50.126 tick 100
00:06:50.126 tick 100
00:06:50.126 tick 250
00:06:50.126 tick 100
00:06:50.126 tick 100
00:06:50.126 tick 100
00:06:50.126 tick 250
00:06:50.126 tick 500
00:06:50.126 tick 100
00:06:50.126 tick 100
00:06:50.126 tick 250
00:06:50.126 tick 100
00:06:50.126 tick 100
00:06:50.126 test_end
00:06:50.126
00:06:50.126 real 0m1.282s
00:06:50.126 user 0m1.203s
00:06:50.126 sys 0m0.074s
00:06:50.126 09:27:39 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:50.126 09:27:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:50.126 ************************************
00:06:50.126 END TEST event_reactor
00:06:50.126 ************************************
00:06:50.386 09:27:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:50.386 09:27:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:50.386 09:27:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:50.386 09:27:39 event -- common/autotest_common.sh@10 -- # set +x
00:06:50.386 ************************************
00:06:50.386 START TEST event_reactor_perf
00:06:50.386 ************************************
00:06:50.386 09:27:39 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:50.386 [2024-10-07 09:27:39.159795] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:06:50.386 [2024-10-07 09:27:39.159851] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105105 ]
00:06:50.386 [2024-10-07 09:27:39.212844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.386 [2024-10-07 09:27:39.317477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.767 test_start
00:06:51.767 test_end
00:06:51.767 Performance: 436275 events per second
00:06:51.767
00:06:51.767 real 0m1.282s
00:06:51.767 user 0m1.202s
00:06:51.767 sys 0m0.076s
00:06:51.767 09:27:40 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:51.767 09:27:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:51.767 ************************************
00:06:51.767 END TEST event_reactor_perf
00:06:51.767 ************************************
00:06:51.767 09:27:40 event -- event/event.sh@49 -- # uname -s
00:06:51.767 09:27:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:51.767 09:27:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:51.767 09:27:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:51.767 09:27:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:51.767 09:27:40 event -- common/autotest_common.sh@10 -- # set +x
00:06:51.767 ************************************
00:06:51.767 START TEST event_scheduler
00:06:51.767 ************************************
00:06:51.767 09:27:40 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:51.767 * Looking for test storage...
00:06:51.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:51.767 09:27:40 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:51.767 09:27:40 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version
00:06:51.767 09:27:40 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:51.767 09:27:40 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:06:51.767 09:27:40 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:51.768 09:27:40 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:51.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.768 --rc genhtml_branch_coverage=1
00:06:51.768 --rc genhtml_function_coverage=1
00:06:51.768 --rc genhtml_legend=1
00:06:51.768 --rc geninfo_all_blocks=1
00:06:51.768 --rc geninfo_unexecuted_blocks=1
00:06:51.768
00:06:51.768 '
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:51.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.768 --rc genhtml_branch_coverage=1
00:06:51.768 --rc genhtml_function_coverage=1
00:06:51.768 --rc genhtml_legend=1
00:06:51.768 --rc geninfo_all_blocks=1
00:06:51.768 --rc geninfo_unexecuted_blocks=1
00:06:51.768
00:06:51.768 '
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:51.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.768 --rc genhtml_branch_coverage=1
00:06:51.768 --rc genhtml_function_coverage=1
00:06:51.768 --rc genhtml_legend=1
00:06:51.768 --rc geninfo_all_blocks=1
00:06:51.768 --rc geninfo_unexecuted_blocks=1
00:06:51.768
00:06:51.768 '
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:51.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.768 --rc genhtml_branch_coverage=1
00:06:51.768 --rc genhtml_function_coverage=1
00:06:51.768 --rc genhtml_legend=1
00:06:51.768 --rc geninfo_all_blocks=1
00:06:51.768 --rc geninfo_unexecuted_blocks=1
00:06:51.768
00:06:51.768 '
00:06:51.768 09:27:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:51.768 09:27:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=105299
00:06:51.768 09:27:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:51.768 09:27:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:51.768 09:27:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 105299
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 105299 ']'
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:51.768 09:27:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:51.768 [2024-10-07 09:27:40.677370] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:06:51.768 [2024-10-07 09:27:40.677471] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105299 ]
00:06:51.768 [2024-10-07 09:27:40.734898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:52.041 [2024-10-07 09:27:40.848697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.041 [2024-10-07 09:27:40.848758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:06:52.041 [2024-10-07 09:27:40.848824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:06:52.041 [2024-10-07 09:27:40.848828] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:06:52.041 09:27:40 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:52.041 09:27:40 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:06:52.041 09:27:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:52.041 09:27:40 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.041 09:27:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:52.041 [2024-10-07 09:27:40.921705] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:52.041 [2024-10-07 09:27:40.921732] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:52.041 [2024-10-07 09:27:40.921749] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:52.041 [2024-10-07 09:27:40.921760] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:52.041 [2024-10-07 09:27:40.921770] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:52.041 09:27:40 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.041 09:27:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:52.041 09:27:40 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.041 09:27:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:52.041 [2024-10-07 09:27:41.016189] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:52.041 09:27:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.041 09:27:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:52.041 09:27:41 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:52.041 09:27:41 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:52.041 09:27:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 ************************************
00:06:52.302 START TEST scheduler_create_thread
00:06:52.302 ************************************
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 2
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 3
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 4
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 5
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 6
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 7
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 8
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 9
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 10
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.302 09:27:41
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.302 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.874 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.874 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:52.874 09:27:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:52.874 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.874 09:27:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.813 09:27:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.813 00:06:53.813 real 0m1.752s 00:06:53.813 user 0m0.008s 00:06:53.813 sys 0m0.005s 00:06:53.813 09:27:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.813 09:27:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.813 ************************************ 00:06:53.813 END TEST scheduler_create_thread 00:06:53.813 ************************************ 00:06:54.074 09:27:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:54.074 09:27:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 105299 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 105299 ']' 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@954 -- # kill 
-0 105299 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105299 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105299' 00:06:54.074 killing process with pid 105299 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 105299 00:06:54.074 09:27:42 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 105299 00:06:54.333 [2024-10-07 09:27:43.275780] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:54.592 00:06:54.592 real 0m3.052s 00:06:54.592 user 0m3.937s 00:06:54.592 sys 0m0.365s 00:06:54.592 09:27:43 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.592 09:27:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.592 ************************************ 00:06:54.592 END TEST event_scheduler 00:06:54.592 ************************************ 00:06:54.592 09:27:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:54.592 09:27:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:54.592 09:27:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.592 09:27:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.592 09:27:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.592 ************************************ 00:06:54.592 START TEST app_repeat 00:06:54.592 ************************************ 00:06:54.592 09:27:43 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:54.592 09:27:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.592 09:27:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.592 09:27:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:54.592 09:27:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=105718 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105718' 00:06:54.851 Process app_repeat pid: 105718 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:54.851 spdk_app_start Round 0 00:06:54.851 09:27:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105718 /var/tmp/spdk-nbd.sock 00:06:54.851 09:27:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 105718 ']' 00:06:54.851 09:27:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.851 09:27:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.851 09:27:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.851 09:27:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.851 09:27:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.851 [2024-10-07 09:27:43.611647] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:06:54.851 [2024-10-07 09:27:43.611716] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105718 ] 00:06:54.851 [2024-10-07 09:27:43.665450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.851 [2024-10-07 09:27:43.769298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.851 [2024-10-07 09:27:43.769301] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.109 09:27:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.109 09:27:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:55.109 09:27:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.368 Malloc0 00:06:55.368 09:27:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.627 Malloc1 00:06:55.627 09:27:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.627 
09:27:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.627 09:27:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:55.885 /dev/nbd0 00:06:55.885 09:27:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:55.885 09:27:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:55.886 1+0 records in 00:06:55.886 1+0 records out 00:06:55.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225431 s, 18.2 MB/s 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.886 09:27:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:55.886 09:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.886 09:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.886 09:27:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.144 /dev/nbd1 00:06:56.144 09:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.144 09:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.144 09:27:45 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.144 1+0 records in 00:06:56.144 1+0 records out 00:06:56.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145307 s, 28.2 MB/s 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.144 09:27:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.144 09:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.144 09:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.144 09:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.144 09:27:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.144 09:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.402 09:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.402 { 00:06:56.402 "nbd_device": "/dev/nbd0", 00:06:56.402 "bdev_name": "Malloc0" 00:06:56.402 }, 00:06:56.402 { 00:06:56.402 "nbd_device": "/dev/nbd1", 00:06:56.402 "bdev_name": "Malloc1" 00:06:56.402 } 00:06:56.402 ]' 00:06:56.402 09:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.402 { 00:06:56.402 "nbd_device": "/dev/nbd0", 00:06:56.402 "bdev_name": "Malloc0" 00:06:56.402 
}, 00:06:56.402 { 00:06:56.402 "nbd_device": "/dev/nbd1", 00:06:56.402 "bdev_name": "Malloc1" 00:06:56.402 } 00:06:56.402 ]' 00:06:56.402 09:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.403 09:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.403 /dev/nbd1' 00:06:56.403 09:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.403 /dev/nbd1' 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.663 256+0 records in 00:06:56.663 256+0 records out 00:06:56.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386831 s, 271 MB/s 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.663 256+0 records in 00:06:56.663 256+0 records out 00:06:56.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202111 s, 51.9 MB/s 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.663 256+0 records in 00:06:56.663 256+0 records out 00:06:56.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02264 s, 46.3 MB/s 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.663 09:27:45 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.663 09:27:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.664 09:27:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.664 09:27:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.664 09:27:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.664 09:27:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.664 09:27:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.664 09:27:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.923 09:27:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.181 09:27:46 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.181 09:27:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.439 09:27:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.439 09:27:46 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.698 09:27:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:57.959 [2024-10-07 09:27:46.948737] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.219 [2024-10-07 09:27:47.050725] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.219 [2024-10-07 09:27:47.050725] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.219 [2024-10-07 09:27:47.102182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.219 [2024-10-07 09:27:47.102259] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:00.760 09:27:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.760 09:27:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:00.760 spdk_app_start Round 1 00:07:00.760 09:27:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105718 /var/tmp/spdk-nbd.sock 00:07:00.760 09:27:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 105718 ']' 00:07:00.760 09:27:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.760 09:27:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.760 09:27:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:00.760 09:27:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.760 09:27:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.018 09:27:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.018 09:27:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:01.018 09:27:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.277 Malloc0 00:07:01.277 09:27:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.537 Malloc1 00:07:01.798 09:27:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.798 09:27:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.057 /dev/nbd0 00:07:02.057 09:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.057 09:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.057 1+0 records in 00:07:02.057 1+0 records out 00:07:02.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212765 s, 19.3 MB/s 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:02.057 09:27:50 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.057 09:27:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:02.057 09:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.057 09:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.057 09:27:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.316 /dev/nbd1 00:07:02.316 09:27:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.316 09:27:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.316 1+0 records in 00:07:02.316 1+0 records out 00:07:02.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215393 s, 19.0 MB/s 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.316 09:27:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:02.316 09:27:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.316 09:27:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.316 09:27:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.316 09:27:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.316 09:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.575 { 00:07:02.575 "nbd_device": "/dev/nbd0", 00:07:02.575 "bdev_name": "Malloc0" 00:07:02.575 }, 00:07:02.575 { 00:07:02.575 "nbd_device": "/dev/nbd1", 00:07:02.575 "bdev_name": "Malloc1" 00:07:02.575 } 00:07:02.575 ]' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.575 { 00:07:02.575 "nbd_device": "/dev/nbd0", 00:07:02.575 "bdev_name": "Malloc0" 00:07:02.575 }, 00:07:02.575 { 00:07:02.575 "nbd_device": "/dev/nbd1", 00:07:02.575 "bdev_name": "Malloc1" 00:07:02.575 } 00:07:02.575 ]' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.575 /dev/nbd1' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.575 /dev/nbd1' 00:07:02.575 
09:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.575 256+0 records in 00:07:02.575 256+0 records out 00:07:02.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513571 s, 204 MB/s 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.575 256+0 records in 00:07:02.575 256+0 records out 00:07:02.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203054 s, 51.6 MB/s 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.575 256+0 records in 00:07:02.575 256+0 records out 00:07:02.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220209 s, 47.6 MB/s 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.575 09:27:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.143 09:27:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.144 09:27:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.402 09:27:52 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.402 09:27:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.668 09:27:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.668 09:27:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.928 09:27:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.189 [2024-10-07 09:27:53.000084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.189 [2024-10-07 09:27:53.100537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.189 [2024-10-07 09:27:53.100538] 
reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.189 [2024-10-07 09:27:53.158291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.189 [2024-10-07 09:27:53.158376] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:07.484 09:27:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:07.484 09:27:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:07.484 spdk_app_start Round 2 00:07:07.484 09:27:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105718 /var/tmp/spdk-nbd.sock 00:07:07.484 09:27:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 105718 ']' 00:07:07.484 09:27:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.484 09:27:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.484 09:27:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:07.484 09:27:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.484 09:27:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.484 09:27:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.484 09:27:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:07.484 09:27:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.484 Malloc0 00:07:07.484 09:27:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.743 Malloc1 00:07:07.743 09:27:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.743 09:27:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.743 09:27:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.743 09:27:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.744 09:27:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:08.004 /dev/nbd0 00:07:08.004 09:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.004 09:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.004 1+0 records in 00:07:08.004 1+0 records out 00:07:08.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260278 s, 15.7 MB/s 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:08.004 09:27:56 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.004 09:27:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:08.004 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.004 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.004 09:27:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:08.264 /dev/nbd1 00:07:08.522 09:27:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.522 09:27:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.522 09:27:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:08.522 09:27:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:08.522 09:27:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.522 09:27:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.522 09:27:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:08.522 09:27:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:08.522 09:27:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.522 09:27:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.523 09:27:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.523 1+0 records in 00:07:08.523 1+0 records out 00:07:08.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163604 s, 25.0 MB/s 00:07:08.523 09:27:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.523 09:27:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:08.523 09:27:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.523 09:27:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.523 09:27:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:08.523 09:27:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.523 09:27:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.523 09:27:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.523 09:27:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.523 09:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.781 { 00:07:08.781 "nbd_device": "/dev/nbd0", 00:07:08.781 "bdev_name": "Malloc0" 00:07:08.781 }, 00:07:08.781 { 00:07:08.781 "nbd_device": "/dev/nbd1", 00:07:08.781 "bdev_name": "Malloc1" 00:07:08.781 } 00:07:08.781 ]' 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.781 { 00:07:08.781 "nbd_device": "/dev/nbd0", 00:07:08.781 "bdev_name": "Malloc0" 00:07:08.781 }, 00:07:08.781 { 00:07:08.781 "nbd_device": "/dev/nbd1", 00:07:08.781 "bdev_name": "Malloc1" 00:07:08.781 } 00:07:08.781 ]' 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.781 /dev/nbd1' 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.781 /dev/nbd1' 00:07:08.781 
09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.781 256+0 records in 00:07:08.781 256+0 records out 00:07:08.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504786 s, 208 MB/s 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.781 256+0 records in 00:07:08.781 256+0 records out 00:07:08.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197558 s, 53.1 MB/s 00:07:08.781 09:27:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.782 256+0 records in 00:07:08.782 256+0 records out 00:07:08.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218532 s, 48.0 MB/s 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.782 09:27:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.040 09:27:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.298 09:27:58 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.298 09:27:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.564 09:27:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.564 09:27:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.564 09:27:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.823 09:27:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.823 09:27:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:10.083 09:27:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:10.344 [2024-10-07 09:27:59.121562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.345 [2024-10-07 09:27:59.223202] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.345 [2024-10-07 09:27:59.223206] 
reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.345 [2024-10-07 09:27:59.272511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:10.345 [2024-10-07 09:27:59.272570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.884 09:28:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 105718 /var/tmp/spdk-nbd.sock 00:07:12.884 09:28:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 105718 ']' 00:07:12.884 09:28:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.884 09:28:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.884 09:28:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:12.884 09:28:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.884 09:28:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.142 09:28:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.142 09:28:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:13.142 09:28:02 event.app_repeat -- event/event.sh@39 -- # killprocess 105718 00:07:13.142 09:28:02 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 105718 ']' 00:07:13.142 09:28:02 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 105718 00:07:13.142 09:28:02 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:13.142 09:28:02 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.142 09:28:02 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105718 00:07:13.401 09:28:02 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.402 09:28:02 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.402 09:28:02 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105718' 00:07:13.402 killing process with pid 105718 00:07:13.402 09:28:02 event.app_repeat -- common/autotest_common.sh@969 -- # kill 105718 00:07:13.402 09:28:02 event.app_repeat -- common/autotest_common.sh@974 -- # wait 105718 00:07:13.402 spdk_app_start is called in Round 0. 00:07:13.402 Shutdown signal received, stop current app iteration 00:07:13.402 Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 reinitialization... 00:07:13.402 spdk_app_start is called in Round 1. 00:07:13.402 Shutdown signal received, stop current app iteration 00:07:13.402 Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 reinitialization... 00:07:13.402 spdk_app_start is called in Round 2. 
00:07:13.402 Shutdown signal received, stop current app iteration 00:07:13.402 Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 reinitialization... 00:07:13.402 spdk_app_start is called in Round 3. 00:07:13.402 Shutdown signal received, stop current app iteration 00:07:13.661 09:28:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:13.661 09:28:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:13.661 00:07:13.661 real 0m18.819s 00:07:13.661 user 0m41.417s 00:07:13.661 sys 0m3.182s 00:07:13.661 09:28:02 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.661 09:28:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.661 ************************************ 00:07:13.661 END TEST app_repeat 00:07:13.661 ************************************ 00:07:13.661 09:28:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:13.661 09:28:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:13.661 09:28:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.661 09:28:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.661 09:28:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.661 ************************************ 00:07:13.661 START TEST cpu_locks 00:07:13.661 ************************************ 00:07:13.661 09:28:02 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:13.661 * Looking for test storage... 
00:07:13.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:13.661 09:28:02 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:13.661 09:28:02 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:13.661 09:28:02 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:13.661 09:28:02 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:13.661 09:28:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.662 09:28:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:13.662 09:28:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:13.662 09:28:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.662 09:28:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:13.662 09:28:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.662 09:28:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.662 09:28:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.662 09:28:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:13.662 09:28:02 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.662 09:28:02 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:13.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.662 --rc genhtml_branch_coverage=1 00:07:13.662 --rc genhtml_function_coverage=1 00:07:13.662 --rc genhtml_legend=1 00:07:13.662 --rc geninfo_all_blocks=1 00:07:13.662 --rc geninfo_unexecuted_blocks=1 00:07:13.662 00:07:13.662 ' 00:07:13.662 09:28:02 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:13.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.662 --rc genhtml_branch_coverage=1 00:07:13.662 --rc genhtml_function_coverage=1 00:07:13.662 --rc genhtml_legend=1 00:07:13.662 --rc geninfo_all_blocks=1 00:07:13.662 --rc geninfo_unexecuted_blocks=1 
00:07:13.662 00:07:13.662 ' 00:07:13.662 09:28:02 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:13.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.662 --rc genhtml_branch_coverage=1 00:07:13.662 --rc genhtml_function_coverage=1 00:07:13.662 --rc genhtml_legend=1 00:07:13.662 --rc geninfo_all_blocks=1 00:07:13.662 --rc geninfo_unexecuted_blocks=1 00:07:13.662 00:07:13.662 ' 00:07:13.662 09:28:02 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:13.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.662 --rc genhtml_branch_coverage=1 00:07:13.662 --rc genhtml_function_coverage=1 00:07:13.662 --rc genhtml_legend=1 00:07:13.662 --rc geninfo_all_blocks=1 00:07:13.662 --rc geninfo_unexecuted_blocks=1 00:07:13.662 00:07:13.662 ' 00:07:13.662 09:28:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:13.662 09:28:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:13.662 09:28:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:13.662 09:28:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:13.662 09:28:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.662 09:28:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.662 09:28:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.662 ************************************ 00:07:13.662 START TEST default_locks 00:07:13.662 ************************************ 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=108109 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 108109 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 108109 ']' 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.662 09:28:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.923 [2024-10-07 09:28:02.665828] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:07:13.923 [2024-10-07 09:28:02.665900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108109 ] 00:07:13.923 [2024-10-07 09:28:02.720555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.923 [2024-10-07 09:28:02.828496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.181 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.181 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:14.181 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 108109 00:07:14.181 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 108109 00:07:14.181 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.442 lslocks: write error 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 108109 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 108109 ']' 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 108109 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108109 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 108109' 00:07:14.442 killing process with pid 108109 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 108109 00:07:14.442 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 108109 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 108109 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 108109 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 108109 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 108109 ']' 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (108109) - No such process 00:07:15.009 ERROR: process (pid: 108109) is no longer running 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:15.009 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:15.010 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:15.010 09:28:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:15.010 00:07:15.010 real 0m1.216s 00:07:15.010 user 0m1.185s 00:07:15.010 sys 0m0.509s 00:07:15.010 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.010 09:28:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.010 ************************************ 00:07:15.010 END TEST default_locks 00:07:15.010 ************************************ 00:07:15.010 09:28:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:15.010 09:28:03 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.010 09:28:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.010 09:28:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.010 ************************************ 00:07:15.010 START TEST default_locks_via_rpc 00:07:15.010 ************************************ 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=108271 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 108271 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 108271 ']' 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.010 09:28:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.010 [2024-10-07 09:28:03.935991] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:07:15.010 [2024-10-07 09:28:03.936072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108271 ] 00:07:15.010 [2024-10-07 09:28:03.989982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.269 [2024-10-07 09:28:04.088723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.530 09:28:04 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 108271 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 108271 00:07:15.530 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 108271 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 108271 ']' 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 108271 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108271 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108271' 00:07:15.791 killing process with pid 108271 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 108271 00:07:15.791 09:28:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 108271 00:07:16.362 00:07:16.362 real 0m1.236s 00:07:16.362 user 0m1.210s 00:07:16.362 sys 0m0.494s 00:07:16.362 09:28:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.362 09:28:05 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.362 ************************************ 00:07:16.362 END TEST default_locks_via_rpc 00:07:16.362 ************************************ 00:07:16.362 09:28:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:16.362 09:28:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.362 09:28:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.362 09:28:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.362 ************************************ 00:07:16.362 START TEST non_locking_app_on_locked_coremask 00:07:16.362 ************************************ 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108427 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108427 /var/tmp/spdk.sock 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 108427 ']' 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:16.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.362 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.362 [2024-10-07 09:28:05.227211] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:16.362 [2024-10-07 09:28:05.227316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108427 ] 00:07:16.362 [2024-10-07 09:28:05.282119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.622 [2024-10-07 09:28:05.389615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108544 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108544 /var/tmp/spdk2.sock 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 108544 ']' 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.881 09:28:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.881 [2024-10-07 09:28:05.701047] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:16.881 [2024-10-07 09:28:05.701139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108544 ] 00:07:16.881 [2024-10-07 09:28:05.778639] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:16.881 [2024-10-07 09:28:05.778685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.141 [2024-10-07 09:28:05.988460] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.711 09:28:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.711 09:28:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:17.711 09:28:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108427 00:07:17.711 09:28:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108427 00:07:17.711 09:28:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.280 lslocks: write error 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108427 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 108427 ']' 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 108427 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108427 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 108427' 00:07:18.280 killing process with pid 108427 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 108427 00:07:18.280 09:28:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 108427 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108544 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 108544 ']' 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 108544 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108544 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108544' 00:07:19.221 killing process with pid 108544 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 108544 00:07:19.221 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 108544 00:07:19.791 00:07:19.791 real 0m3.352s 00:07:19.791 user 0m3.551s 00:07:19.791 sys 0m1.058s 00:07:19.791 09:28:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.791 09:28:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.791 ************************************ 00:07:19.791 END TEST non_locking_app_on_locked_coremask 00:07:19.791 ************************************ 00:07:19.791 09:28:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:19.791 09:28:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.791 09:28:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.791 09:28:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.791 ************************************ 00:07:19.791 START TEST locking_app_on_unlocked_coremask 00:07:19.791 ************************************ 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=108843 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 108843 /var/tmp/spdk.sock 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 108843 ']' 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.791 09:28:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.791 09:28:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.791 [2024-10-07 09:28:08.632806] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:19.791 [2024-10-07 09:28:08.632912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108843 ] 00:07:19.791 [2024-10-07 09:28:08.688166] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:19.791 [2024-10-07 09:28:08.688205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.051 [2024-10-07 09:28:08.796372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=108966 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 108966 /var/tmp/spdk2.sock 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 108966 ']' 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.313 09:28:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.313 [2024-10-07 09:28:09.126196] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:07:20.313 [2024-10-07 09:28:09.126270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108966 ] 00:07:20.313 [2024-10-07 09:28:09.209829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.574 [2024-10-07 09:28:09.421592] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.143 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.143 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:21.143 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 108966 00:07:21.143 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.143 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108966 00:07:21.709 lslocks: write error 00:07:21.709 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 108843 00:07:21.709 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 108843 ']' 00:07:21.709 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 108843 00:07:21.709 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:21.709 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.709 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108843 00:07:21.967 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.967 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.967 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108843' 00:07:21.967 killing process with pid 108843 00:07:21.967 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 108843 00:07:21.967 09:28:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 108843 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 108966 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 108966 ']' 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 108966 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108966 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108966' 00:07:22.954 killing process with pid 108966 00:07:22.954 09:28:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 108966 00:07:22.954 09:28:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 108966 00:07:23.213 00:07:23.213 real 0m3.497s 00:07:23.213 user 0m3.720s 00:07:23.213 sys 0m1.078s 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.213 ************************************ 00:07:23.213 END TEST locking_app_on_unlocked_coremask 00:07:23.213 ************************************ 00:07:23.213 09:28:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:23.213 09:28:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.213 09:28:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.213 09:28:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.213 ************************************ 00:07:23.213 START TEST locking_app_on_locked_coremask 00:07:23.213 ************************************ 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=109268 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 109268 /var/tmp/spdk.sock 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109268 ']' 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.213 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.213 [2024-10-07 09:28:12.181859] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:23.213 [2024-10-07 09:28:12.181937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109268 ] 00:07:23.471 [2024-10-07 09:28:12.240463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.471 [2024-10-07 09:28:12.344783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=109388 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 109388 /var/tmp/spdk2.sock 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 109388 /var/tmp/spdk2.sock 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 109388 /var/tmp/spdk2.sock 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109388 ']' 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.730 09:28:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.730 [2024-10-07 09:28:12.655319] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:23.730 [2024-10-07 09:28:12.655402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109388 ] 00:07:23.989 [2024-10-07 09:28:12.736551] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 109268 has claimed it. 00:07:23.989 [2024-10-07 09:28:12.736620] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:24.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (109388) - No such process 00:07:24.558 ERROR: process (pid: 109388) is no longer running 00:07:24.558 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.558 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:24.558 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:24.558 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.558 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.558 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.558 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 109268 00:07:24.558 09:28:13 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109268 00:07:24.558 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.819 lslocks: write error 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 109268 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 109268 ']' 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 109268 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109268 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109268' 00:07:24.819 killing process with pid 109268 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 109268 00:07:24.819 09:28:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 109268 00:07:25.389 00:07:25.389 real 0m2.066s 00:07:25.389 user 0m2.264s 00:07:25.389 sys 0m0.658s 00:07:25.389 09:28:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.389 09:28:14 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.389 ************************************ 00:07:25.389 END TEST locking_app_on_locked_coremask 00:07:25.389 ************************************ 00:07:25.389 09:28:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:25.389 09:28:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.389 09:28:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.389 09:28:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.389 ************************************ 00:07:25.389 START TEST locking_overlapped_coremask 00:07:25.389 ************************************ 00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109552 00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109552 /var/tmp/spdk.sock 00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 109552 ']' 00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.389 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.390 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.390 [2024-10-07 09:28:14.299656] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:25.390 [2024-10-07 09:28:14.299747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109552 ] 00:07:25.390 [2024-10-07 09:28:14.353814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.651 [2024-10-07 09:28:14.463075] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.651 [2024-10-07 09:28:14.463140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.651 [2024-10-07 09:28:14.463137] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109675 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109675 /var/tmp/spdk2.sock 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 109675 /var/tmp/spdk2.sock 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 109675 /var/tmp/spdk2.sock 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 109675 ']' 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.910 09:28:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.910 [2024-10-07 09:28:14.787841] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:07:25.910 [2024-10-07 09:28:14.787919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109675 ] 00:07:25.910 [2024-10-07 09:28:14.868070] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109552 has claimed it. 00:07:25.910 [2024-10-07 09:28:14.868140] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:26.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (109675) - No such process 00:07:26.850 ERROR: process (pid: 109675) is no longer running 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109552 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 109552 ']' 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 109552 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109552 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109552' 00:07:26.851 killing process with pid 109552 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 109552 00:07:26.851 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 109552 00:07:27.109 00:07:27.109 real 0m1.739s 00:07:27.109 user 0m4.712s 00:07:27.109 sys 0m0.447s 00:07:27.109 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.109 09:28:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.109 ************************************ 
00:07:27.109 END TEST locking_overlapped_coremask 00:07:27.109 ************************************ 00:07:27.109 09:28:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:27.109 09:28:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.109 09:28:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.109 09:28:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.109 ************************************ 00:07:27.109 START TEST locking_overlapped_coremask_via_rpc 00:07:27.109 ************************************ 00:07:27.109 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:27.109 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=109839 00:07:27.109 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 109839 /var/tmp/spdk.sock 00:07:27.109 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 109839 ']' 00:07:27.110 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:27.110 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.110 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.110 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:27.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.110 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.110 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.110 [2024-10-07 09:28:16.087461] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:27.110 [2024-10-07 09:28:16.087541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109839 ] 00:07:27.370 [2024-10-07 09:28:16.142037] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:27.370 [2024-10-07 09:28:16.142074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.370 [2024-10-07 09:28:16.241937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.370 [2024-10-07 09:28:16.242001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.370 [2024-10-07 09:28:16.242004] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=109844 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 109844 /var/tmp/spdk2.sock 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 109844 ']' 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.645 09:28:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.645 [2024-10-07 09:28:16.572887] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:27.645 [2024-10-07 09:28:16.572969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109844 ] 00:07:27.903 [2024-10-07 09:28:16.657276] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:27.903 [2024-10-07 09:28:16.657323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.903 [2024-10-07 09:28:16.875221] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.903 [2024-10-07 09:28:16.878724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:27.903 [2024-10-07 09:28:16.878727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.840 09:28:17 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.840 [2024-10-07 09:28:17.587769] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109839 has claimed it. 00:07:28.840 request: 00:07:28.840 { 00:07:28.840 "method": "framework_enable_cpumask_locks", 00:07:28.840 "req_id": 1 00:07:28.840 } 00:07:28.840 Got JSON-RPC error response 00:07:28.840 response: 00:07:28.840 { 00:07:28.840 "code": -32603, 00:07:28.840 "message": "Failed to claim CPU core: 2" 00:07:28.840 } 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 109839 /var/tmp/spdk.sock 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- 
# '[' -z 109839 ']' 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.840 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 109844 /var/tmp/spdk2.sock 00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 109844 ']' 00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.099 09:28:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.360 09:28:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.360 09:28:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:29.360 09:28:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:29.360 09:28:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.360 09:28:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.360 09:28:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.360 00:07:29.360 real 0m2.129s 00:07:29.360 user 0m1.152s 00:07:29.360 sys 0m0.193s 00:07:29.360 09:28:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.360 09:28:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.360 ************************************ 00:07:29.360 END TEST locking_overlapped_coremask_via_rpc 00:07:29.360 ************************************ 00:07:29.360 09:28:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:29.360 09:28:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109839 ]] 00:07:29.360 09:28:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 109839 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 109839 ']' 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 109839 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109839 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109839' 00:07:29.360 killing process with pid 109839 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 109839 00:07:29.360 09:28:18 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 109839 00:07:29.929 09:28:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109844 ]] 00:07:29.929 09:28:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109844 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 109844 ']' 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 109844 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109844 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109844' 00:07:29.929 
killing process with pid 109844 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 109844 00:07:29.929 09:28:18 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 109844 00:07:30.497 09:28:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.497 09:28:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:30.497 09:28:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109839 ]] 00:07:30.497 09:28:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 109839 00:07:30.497 09:28:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 109839 ']' 00:07:30.497 09:28:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 109839 00:07:30.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (109839) - No such process 00:07:30.497 09:28:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 109839 is not found' 00:07:30.497 Process with pid 109839 is not found 00:07:30.497 09:28:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109844 ]] 00:07:30.497 09:28:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109844 00:07:30.497 09:28:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 109844 ']' 00:07:30.497 09:28:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 109844 00:07:30.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (109844) - No such process 00:07:30.497 09:28:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 109844 is not found' 00:07:30.497 Process with pid 109844 is not found 00:07:30.497 09:28:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.497 00:07:30.497 real 0m16.741s 00:07:30.497 user 0m29.806s 00:07:30.497 sys 0m5.377s 00:07:30.497 09:28:19 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.497 09:28:19 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.497 ************************************ 00:07:30.497 END TEST cpu_locks 00:07:30.497 ************************************ 00:07:30.497 00:07:30.497 real 0m42.922s 00:07:30.497 user 1m21.983s 00:07:30.497 sys 0m9.421s 00:07:30.497 09:28:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.497 09:28:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.497 ************************************ 00:07:30.497 END TEST event 00:07:30.497 ************************************ 00:07:30.497 09:28:19 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:30.497 09:28:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.497 09:28:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.497 09:28:19 -- common/autotest_common.sh@10 -- # set +x 00:07:30.497 ************************************ 00:07:30.497 START TEST thread 00:07:30.497 ************************************ 00:07:30.497 09:28:19 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:30.497 * Looking for test storage... 
00:07:30.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:30.497 09:28:19 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:30.497 09:28:19 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:30.497 09:28:19 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:30.497 09:28:19 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:30.497 09:28:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.497 09:28:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.497 09:28:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.497 09:28:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.497 09:28:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.497 09:28:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.497 09:28:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.497 09:28:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.497 09:28:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.497 09:28:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.497 09:28:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.497 09:28:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:30.497 09:28:19 thread -- scripts/common.sh@345 -- # : 1 00:07:30.497 09:28:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.497 09:28:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.497 09:28:19 thread -- scripts/common.sh@365 -- # decimal 1 00:07:30.497 09:28:19 thread -- scripts/common.sh@353 -- # local d=1 00:07:30.497 09:28:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.497 09:28:19 thread -- scripts/common.sh@355 -- # echo 1 00:07:30.497 09:28:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.497 09:28:19 thread -- scripts/common.sh@366 -- # decimal 2 00:07:30.497 09:28:19 thread -- scripts/common.sh@353 -- # local d=2 00:07:30.497 09:28:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.497 09:28:19 thread -- scripts/common.sh@355 -- # echo 2 00:07:30.497 09:28:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.498 09:28:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.498 09:28:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.498 09:28:19 thread -- scripts/common.sh@368 -- # return 0 00:07:30.498 09:28:19 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.498 09:28:19 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:30.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.498 --rc genhtml_branch_coverage=1 00:07:30.498 --rc genhtml_function_coverage=1 00:07:30.498 --rc genhtml_legend=1 00:07:30.498 --rc geninfo_all_blocks=1 00:07:30.498 --rc geninfo_unexecuted_blocks=1 00:07:30.498 00:07:30.498 ' 00:07:30.498 09:28:19 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:30.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.498 --rc genhtml_branch_coverage=1 00:07:30.498 --rc genhtml_function_coverage=1 00:07:30.498 --rc genhtml_legend=1 00:07:30.498 --rc geninfo_all_blocks=1 00:07:30.498 --rc geninfo_unexecuted_blocks=1 00:07:30.498 00:07:30.498 ' 00:07:30.498 09:28:19 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:30.498 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.498 --rc genhtml_branch_coverage=1 00:07:30.498 --rc genhtml_function_coverage=1 00:07:30.498 --rc genhtml_legend=1 00:07:30.498 --rc geninfo_all_blocks=1 00:07:30.498 --rc geninfo_unexecuted_blocks=1 00:07:30.498 00:07:30.498 ' 00:07:30.498 09:28:19 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:30.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.498 --rc genhtml_branch_coverage=1 00:07:30.498 --rc genhtml_function_coverage=1 00:07:30.498 --rc genhtml_legend=1 00:07:30.498 --rc geninfo_all_blocks=1 00:07:30.498 --rc geninfo_unexecuted_blocks=1 00:07:30.498 00:07:30.498 ' 00:07:30.498 09:28:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.498 09:28:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:30.498 09:28:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.498 09:28:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.498 ************************************ 00:07:30.498 START TEST thread_poller_perf 00:07:30.498 ************************************ 00:07:30.498 09:28:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.498 [2024-10-07 09:28:19.454683] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:07:30.498 [2024-10-07 09:28:19.454750] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110328 ] 00:07:30.758 [2024-10-07 09:28:19.515051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.758 [2024-10-07 09:28:19.618570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.758 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:32.141 ====================================== 00:07:32.142 busy:2709331302 (cyc) 00:07:32.142 total_run_count: 358000 00:07:32.142 tsc_hz: 2700000000 (cyc) 00:07:32.142 ====================================== 00:07:32.142 poller_cost: 7567 (cyc), 2802 (nsec) 00:07:32.142 00:07:32.142 real 0m1.300s 00:07:32.142 user 0m1.215s 00:07:32.142 sys 0m0.075s 00:07:32.142 09:28:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.142 09:28:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:32.142 ************************************ 00:07:32.142 END TEST thread_poller_perf 00:07:32.142 ************************************ 00:07:32.142 09:28:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.142 09:28:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:32.142 09:28:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.142 09:28:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.142 ************************************ 00:07:32.142 START TEST thread_poller_perf 00:07:32.142 ************************************ 00:07:32.142 09:28:20 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.142 [2024-10-07 09:28:20.801047] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:32.142 [2024-10-07 09:28:20.801113] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110476 ] 00:07:32.142 [2024-10-07 09:28:20.857378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.142 [2024-10-07 09:28:20.958549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.142 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:33.081 ====================================== 00:07:33.081 busy:2702548347 (cyc) 00:07:33.081 total_run_count: 4869000 00:07:33.081 tsc_hz: 2700000000 (cyc) 00:07:33.081 ====================================== 00:07:33.081 poller_cost: 555 (cyc), 205 (nsec) 00:07:33.081 00:07:33.081 real 0m1.282s 00:07:33.081 user 0m1.198s 00:07:33.081 sys 0m0.078s 00:07:33.081 09:28:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.081 09:28:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.081 ************************************ 00:07:33.081 END TEST thread_poller_perf 00:07:33.081 ************************************ 00:07:33.342 09:28:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:33.342 00:07:33.342 real 0m2.822s 00:07:33.342 user 0m2.551s 00:07:33.342 sys 0m0.272s 00:07:33.342 09:28:22 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.342 09:28:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.342 ************************************ 00:07:33.342 END TEST thread 00:07:33.342 ************************************ 00:07:33.342 09:28:22 -- spdk/autotest.sh@171 -- 
# [[ 0 -eq 1 ]] 00:07:33.342 09:28:22 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:33.342 09:28:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.342 09:28:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.342 09:28:22 -- common/autotest_common.sh@10 -- # set +x 00:07:33.342 ************************************ 00:07:33.342 START TEST app_cmdline 00:07:33.342 ************************************ 00:07:33.342 09:28:22 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:33.342 * Looking for test storage... 00:07:33.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:33.342 09:28:22 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:33.342 09:28:22 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:33.342 09:28:22 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:33.342 09:28:22 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:33.342 09:28:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.343 09:28:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.343 09:28:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.343 09:28:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:33.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.343 --rc genhtml_branch_coverage=1 00:07:33.343 --rc genhtml_function_coverage=1 00:07:33.343 --rc genhtml_legend=1 00:07:33.343 --rc geninfo_all_blocks=1 00:07:33.343 --rc geninfo_unexecuted_blocks=1 00:07:33.343 00:07:33.343 ' 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:33.343 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.343 --rc genhtml_branch_coverage=1 00:07:33.343 --rc genhtml_function_coverage=1 00:07:33.343 --rc genhtml_legend=1 00:07:33.343 --rc geninfo_all_blocks=1 00:07:33.343 --rc geninfo_unexecuted_blocks=1 00:07:33.343 00:07:33.343 ' 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:33.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.343 --rc genhtml_branch_coverage=1 00:07:33.343 --rc genhtml_function_coverage=1 00:07:33.343 --rc genhtml_legend=1 00:07:33.343 --rc geninfo_all_blocks=1 00:07:33.343 --rc geninfo_unexecuted_blocks=1 00:07:33.343 00:07:33.343 ' 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:33.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.343 --rc genhtml_branch_coverage=1 00:07:33.343 --rc genhtml_function_coverage=1 00:07:33.343 --rc genhtml_legend=1 00:07:33.343 --rc geninfo_all_blocks=1 00:07:33.343 --rc geninfo_unexecuted_blocks=1 00:07:33.343 00:07:33.343 ' 00:07:33.343 09:28:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:33.343 09:28:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=110764 00:07:33.343 09:28:22 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:33.343 09:28:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 110764 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 110764 ']' 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:33.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.343 09:28:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.604 [2024-10-07 09:28:22.342349] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:33.604 [2024-10-07 09:28:22.342423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110764 ] 00:07:33.604 [2024-10-07 09:28:22.398456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.604 [2024-10-07 09:28:22.508877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.865 09:28:22 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.865 09:28:22 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:33.865 09:28:22 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:34.125 { 00:07:34.126 "version": "SPDK v25.01-pre git sha1 3365e5306", 00:07:34.126 "fields": { 00:07:34.126 "major": 25, 00:07:34.126 "minor": 1, 00:07:34.126 "patch": 0, 00:07:34.126 "suffix": "-pre", 00:07:34.126 "commit": "3365e5306" 00:07:34.126 } 00:07:34.126 } 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:34.126 09:28:23 app_cmdline -- 
app/cmdline.sh@26 -- # jq -r '.[]' 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:34.126 09:28:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:34.126 09:28:23 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.385 request: 00:07:34.385 { 00:07:34.385 "method": "env_dpdk_get_mem_stats", 00:07:34.385 "req_id": 1 00:07:34.385 } 00:07:34.385 Got JSON-RPC error response 00:07:34.385 response: 00:07:34.385 { 00:07:34.385 "code": -32601, 00:07:34.385 "message": "Method not found" 00:07:34.385 } 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.385 09:28:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 110764 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 110764 ']' 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 110764 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110764 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110764' 00:07:34.385 killing process with pid 110764 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@969 -- # kill 110764 00:07:34.385 09:28:23 app_cmdline -- common/autotest_common.sh@974 -- # wait 110764 00:07:34.952 00:07:34.952 real 0m1.670s 00:07:34.952 user 0m2.057s 00:07:34.952 sys 0m0.486s 
00:07:34.952 09:28:23 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.952 09:28:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.952 ************************************ 00:07:34.952 END TEST app_cmdline 00:07:34.952 ************************************ 00:07:34.952 09:28:23 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:34.952 09:28:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.952 09:28:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.952 09:28:23 -- common/autotest_common.sh@10 -- # set +x 00:07:34.952 ************************************ 00:07:34.952 START TEST version 00:07:34.952 ************************************ 00:07:34.952 09:28:23 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:34.952 * Looking for test storage... 00:07:34.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:34.952 09:28:23 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:34.952 09:28:23 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:34.952 09:28:23 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.211 09:28:23 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.211 09:28:23 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.211 09:28:24 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.211 09:28:24 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.211 09:28:24 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.211 09:28:24 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.211 09:28:24 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.211 09:28:24 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.211 09:28:24 version -- scripts/common.sh@338 -- # local 'op=<' 
00:07:35.211 09:28:24 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.211 09:28:24 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.211 09:28:24 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.211 09:28:24 version -- scripts/common.sh@344 -- # case "$op" in 00:07:35.211 09:28:24 version -- scripts/common.sh@345 -- # : 1 00:07:35.211 09:28:24 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.211 09:28:24 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.211 09:28:24 version -- scripts/common.sh@365 -- # decimal 1 00:07:35.211 09:28:24 version -- scripts/common.sh@353 -- # local d=1 00:07:35.211 09:28:24 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.211 09:28:24 version -- scripts/common.sh@355 -- # echo 1 00:07:35.211 09:28:24 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.211 09:28:24 version -- scripts/common.sh@366 -- # decimal 2 00:07:35.211 09:28:24 version -- scripts/common.sh@353 -- # local d=2 00:07:35.211 09:28:24 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.211 09:28:24 version -- scripts/common.sh@355 -- # echo 2 00:07:35.211 09:28:24 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.211 09:28:24 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.211 09:28:24 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.211 09:28:24 version -- scripts/common.sh@368 -- # return 0 00:07:35.211 09:28:24 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.211 09:28:24 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.211 --rc genhtml_branch_coverage=1 00:07:35.211 --rc genhtml_function_coverage=1 00:07:35.211 --rc genhtml_legend=1 00:07:35.211 --rc geninfo_all_blocks=1 00:07:35.211 --rc geninfo_unexecuted_blocks=1 
00:07:35.211 00:07:35.211 ' 00:07:35.211 09:28:24 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.211 --rc genhtml_branch_coverage=1 00:07:35.211 --rc genhtml_function_coverage=1 00:07:35.211 --rc genhtml_legend=1 00:07:35.211 --rc geninfo_all_blocks=1 00:07:35.211 --rc geninfo_unexecuted_blocks=1 00:07:35.211 00:07:35.211 ' 00:07:35.211 09:28:24 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.211 --rc genhtml_branch_coverage=1 00:07:35.211 --rc genhtml_function_coverage=1 00:07:35.211 --rc genhtml_legend=1 00:07:35.211 --rc geninfo_all_blocks=1 00:07:35.212 --rc geninfo_unexecuted_blocks=1 00:07:35.212 00:07:35.212 ' 00:07:35.212 09:28:24 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.212 --rc genhtml_branch_coverage=1 00:07:35.212 --rc genhtml_function_coverage=1 00:07:35.212 --rc genhtml_legend=1 00:07:35.212 --rc geninfo_all_blocks=1 00:07:35.212 --rc geninfo_unexecuted_blocks=1 00:07:35.212 00:07:35.212 ' 00:07:35.212 09:28:24 version -- app/version.sh@17 -- # get_header_version major 00:07:35.212 09:28:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:35.212 09:28:24 version -- app/version.sh@14 -- # cut -f2 00:07:35.212 09:28:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.212 09:28:24 version -- app/version.sh@17 -- # major=25 00:07:35.212 09:28:24 version -- app/version.sh@18 -- # get_header_version minor 00:07:35.212 09:28:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:35.212 09:28:24 version -- app/version.sh@14 -- # cut -f2 00:07:35.212 
09:28:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.212 09:28:24 version -- app/version.sh@18 -- # minor=1 00:07:35.212 09:28:24 version -- app/version.sh@19 -- # get_header_version patch 00:07:35.212 09:28:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:35.212 09:28:24 version -- app/version.sh@14 -- # cut -f2 00:07:35.212 09:28:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.212 09:28:24 version -- app/version.sh@19 -- # patch=0 00:07:35.212 09:28:24 version -- app/version.sh@20 -- # get_header_version suffix 00:07:35.212 09:28:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:35.212 09:28:24 version -- app/version.sh@14 -- # cut -f2 00:07:35.212 09:28:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.212 09:28:24 version -- app/version.sh@20 -- # suffix=-pre 00:07:35.212 09:28:24 version -- app/version.sh@22 -- # version=25.1 00:07:35.212 09:28:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:35.212 09:28:24 version -- app/version.sh@28 -- # version=25.1rc0 00:07:35.212 09:28:24 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:35.212 09:28:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:35.212 09:28:24 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:35.212 09:28:24 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:35.212 00:07:35.212 real 0m0.196s 00:07:35.212 user 0m0.134s 00:07:35.212 sys 0m0.088s 00:07:35.212 
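The `version` test traced above builds its version string by grepping the `SPDK_VERSION_*` defines out of `include/spdk/version.h`, taking the value field, and stripping quotes, then cross-checks the result against Python's `spdk.__version__`. A hedged standalone sketch of that extraction follows; the macro names and the `-pre` → `rc0` mapping come from the trace, but the generated header and the `awk '{print $NF}'` field split (standing in for the tab-dependent `cut -f2` in the log) are illustrative assumptions, not the real spdk header:

```shell
# Sketch of get_header_version from app/version.sh: pull one
# SPDK_VERSION_* macro value out of a version.h-style header.
# The header below is generated locally for illustration.
header=$(mktemp)
cat > "$header" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

# awk '{print $NF}' stands in for the cut -f2 seen in the trace,
# which relies on tab-separated #define lines in the real header.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$header" \
        | awk '{print $NF}' | tr -d '"'
}

major=$(get_header_version MAJOR)    # 25
minor=$(get_header_version MINOR)    # 1
patch=$(get_header_version PATCH)    # 0
suffix=$(get_header_version SUFFIX)  # -pre

version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"
# the trace maps the -pre suffix to an rc tag: 25.1 -> 25.1rc0
[[ $suffix == -pre ]] && version="${version}rc0"
echo "$version"
rm -f "$header"
```

In the log the same string is then compared against `python3 -c 'import spdk; print(spdk.__version__)'`, and the test passes because both report `25.1rc0`.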
09:28:24 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.212 09:28:24 version -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 ************************************ 00:07:35.212 END TEST version 00:07:35.212 ************************************ 00:07:35.212 09:28:24 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:35.212 09:28:24 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:35.212 09:28:24 -- spdk/autotest.sh@194 -- # uname -s 00:07:35.212 09:28:24 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:35.212 09:28:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:35.212 09:28:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:35.212 09:28:24 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:35.212 09:28:24 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:35.212 09:28:24 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:35.212 09:28:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.212 09:28:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 09:28:24 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:35.212 09:28:24 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:35.212 09:28:24 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:35.212 09:28:24 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:35.212 09:28:24 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:35.212 09:28:24 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:35.212 09:28:24 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:35.212 09:28:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.212 09:28:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.212 09:28:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 ************************************ 00:07:35.212 START TEST nvmf_tcp 00:07:35.212 ************************************ 00:07:35.212 09:28:24 nvmf_tcp -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:35.212 * Looking for test storage... 00:07:35.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:35.212 09:28:24 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.212 09:28:24 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.212 09:28:24 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.472 09:28:24 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.472 --rc genhtml_branch_coverage=1 00:07:35.472 --rc genhtml_function_coverage=1 00:07:35.472 --rc genhtml_legend=1 00:07:35.472 --rc geninfo_all_blocks=1 00:07:35.472 --rc geninfo_unexecuted_blocks=1 00:07:35.472 00:07:35.472 ' 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.472 --rc genhtml_branch_coverage=1 00:07:35.472 --rc genhtml_function_coverage=1 00:07:35.472 --rc genhtml_legend=1 00:07:35.472 --rc geninfo_all_blocks=1 00:07:35.472 --rc geninfo_unexecuted_blocks=1 00:07:35.472 00:07:35.472 ' 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:07:35.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.472 --rc genhtml_branch_coverage=1 00:07:35.472 --rc genhtml_function_coverage=1 00:07:35.472 --rc genhtml_legend=1 00:07:35.472 --rc geninfo_all_blocks=1 00:07:35.472 --rc geninfo_unexecuted_blocks=1 00:07:35.472 00:07:35.472 ' 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.472 --rc genhtml_branch_coverage=1 00:07:35.472 --rc genhtml_function_coverage=1 00:07:35.472 --rc genhtml_legend=1 00:07:35.472 --rc geninfo_all_blocks=1 00:07:35.472 --rc geninfo_unexecuted_blocks=1 00:07:35.472 00:07:35.472 ' 00:07:35.472 09:28:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:35.472 09:28:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:35.472 09:28:24 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.472 09:28:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.472 ************************************ 00:07:35.472 START TEST nvmf_target_core 00:07:35.472 ************************************ 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:35.472 * Looking for test storage... 
00:07:35.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.472 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.473 --rc genhtml_branch_coverage=1 00:07:35.473 --rc genhtml_function_coverage=1 00:07:35.473 --rc genhtml_legend=1 00:07:35.473 --rc geninfo_all_blocks=1 00:07:35.473 --rc geninfo_unexecuted_blocks=1 00:07:35.473 00:07:35.473 ' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.473 --rc genhtml_branch_coverage=1 
00:07:35.473 --rc genhtml_function_coverage=1 00:07:35.473 --rc genhtml_legend=1 00:07:35.473 --rc geninfo_all_blocks=1 00:07:35.473 --rc geninfo_unexecuted_blocks=1 00:07:35.473 00:07:35.473 ' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.473 --rc genhtml_branch_coverage=1 00:07:35.473 --rc genhtml_function_coverage=1 00:07:35.473 --rc genhtml_legend=1 00:07:35.473 --rc geninfo_all_blocks=1 00:07:35.473 --rc geninfo_unexecuted_blocks=1 00:07:35.473 00:07:35.473 ' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.473 --rc genhtml_branch_coverage=1 00:07:35.473 --rc genhtml_function_coverage=1 00:07:35.473 --rc genhtml_legend=1 00:07:35.473 --rc geninfo_all_blocks=1 00:07:35.473 --rc geninfo_unexecuted_blocks=1 00:07:35.473 00:07:35.473 ' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
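Note the non-fatal complaint just above: `common.sh: line 33: [: : integer expression expected`. The trace shows `'[' '' -eq 1 ']'`, i.e. `-eq` applied to an empty string because the variable behind it is unset in this run; `[` requires both `-eq` operands to be integers, so the test errors with status 2 and the branch is skipped. A generic hardening sketch (this shows the usual `${var:-0}` default, not a claim about how spdk's common.sh fixes it):

```shell
# Hypothetical flag variable that may be unset/empty in CI,
# mirroring the empty operand in the logged trace.
flag=""

# Broken pattern from the log: '[' '' -eq 1 ']' prints
# "[: : integer expression expected" and returns status 2.
[ "$flag" -eq 1 ] 2>/dev/null && echo "enabled"

# Hardened: default empty/unset to 0 before the numeric test.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

The same message recurs each time `build_nvmf_app_args` runs, so the guard (or `[[ ... ]]`, which treats an empty operand as 0 in arithmetic context) would silence it everywhere at once.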
00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.473 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.734 ************************************ 00:07:35.734 START TEST nvmf_abort 00:07:35.734 ************************************ 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:35.734 * Looking for test storage... 
00:07:35.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.734 
09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.734 --rc genhtml_branch_coverage=1 00:07:35.734 --rc genhtml_function_coverage=1 00:07:35.734 --rc genhtml_legend=1 00:07:35.734 --rc geninfo_all_blocks=1 00:07:35.734 --rc 
geninfo_unexecuted_blocks=1 00:07:35.734 00:07:35.734 ' 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.734 --rc genhtml_branch_coverage=1 00:07:35.734 --rc genhtml_function_coverage=1 00:07:35.734 --rc genhtml_legend=1 00:07:35.734 --rc geninfo_all_blocks=1 00:07:35.734 --rc geninfo_unexecuted_blocks=1 00:07:35.734 00:07:35.734 ' 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.734 --rc genhtml_branch_coverage=1 00:07:35.734 --rc genhtml_function_coverage=1 00:07:35.734 --rc genhtml_legend=1 00:07:35.734 --rc geninfo_all_blocks=1 00:07:35.734 --rc geninfo_unexecuted_blocks=1 00:07:35.734 00:07:35.734 ' 00:07:35.734 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.734 --rc genhtml_branch_coverage=1 00:07:35.734 --rc genhtml_function_coverage=1 00:07:35.734 --rc genhtml_legend=1 00:07:35.734 --rc geninfo_all_blocks=1 00:07:35.734 --rc geninfo_unexecuted_blocks=1 00:07:35.734 00:07:35.734 ' 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.735 09:28:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.735 09:28:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.674 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.674 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.674 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.674 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.674 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.674 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.674 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.674 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.933 09:28:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:07:37.933 Found 0000:09:00.0 (0x8086 - 0x1592) 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.933 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:07:37.934 Found 0000:09:00.1 (0x8086 - 0x1592) 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.934 09:28:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:37.934 Found net devices under 0000:09:00.0: cvl_0_0 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:07:37.934 Found net devices under 0000:09:00.1: cvl_0_1 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:37.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:07:37.934 00:07:37.934 --- 10.0.0.2 ping statistics --- 00:07:37.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.934 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:07:37.934 00:07:37.934 --- 10.0.0.1 ping statistics --- 00:07:37.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.934 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=112779 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 112779 00:07:37.934 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 112779 ']' 00:07:37.935 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.935 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.935 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.935 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.935 09:28:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.195 [2024-10-07 09:28:26.939720] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:07:38.195 [2024-10-07 09:28:26.939813] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.196 [2024-10-07 09:28:27.002633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.196 [2024-10-07 09:28:27.109193] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.196 [2024-10-07 09:28:27.109253] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.196 [2024-10-07 09:28:27.109281] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.196 [2024-10-07 09:28:27.109292] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.196 [2024-10-07 09:28:27.109302] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:38.196 [2024-10-07 09:28:27.110080] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.196 [2024-10-07 09:28:27.110141] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.196 [2024-10-07 09:28:27.110145] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.458 [2024-10-07 09:28:27.258111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.458 Malloc0 00:07:38.458 09:28:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.458 Delay0 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.458 [2024-10-07 09:28:27.336317] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.458 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:38.458 [2024-10-07 09:28:27.442408] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:41.003 Initializing NVMe Controllers 00:07:41.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:41.003 controller IO queue size 128 less than required 00:07:41.003 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:41.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:41.003 Initialization complete. Launching workers. 
00:07:41.003 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28737 00:07:41.003 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28798, failed to submit 62 00:07:41.003 success 28741, unsuccessful 57, failed 0 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.003 rmmod nvme_tcp 00:07:41.003 rmmod nvme_fabrics 00:07:41.003 rmmod nvme_keyring 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:41.003 09:28:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 112779 ']' 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 112779 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 112779 ']' 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 112779 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112779 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112779' 00:07:41.003 killing process with pid 112779 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 112779 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 112779 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep 
-v SPDK_NVMF 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.003 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.543 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.543 00:07:43.543 real 0m7.491s 00:07:43.543 user 0m10.878s 00:07:43.543 sys 0m2.409s 00:07:43.543 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.543 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:43.543 ************************************ 00:07:43.543 END TEST nvmf_abort 00:07:43.543 ************************************ 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.543 ************************************ 00:07:43.543 START TEST nvmf_ns_hotplug_stress 00:07:43.543 ************************************ 00:07:43.543 09:28:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:43.543 * Looking for test storage... 00:07:43.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.543 
09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.543 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.544 09:28:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:43.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.544 --rc genhtml_branch_coverage=1 00:07:43.544 --rc genhtml_function_coverage=1 00:07:43.544 --rc genhtml_legend=1 00:07:43.544 --rc geninfo_all_blocks=1 00:07:43.544 --rc geninfo_unexecuted_blocks=1 00:07:43.544 00:07:43.544 ' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:43.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.544 --rc genhtml_branch_coverage=1 00:07:43.544 --rc genhtml_function_coverage=1 00:07:43.544 --rc genhtml_legend=1 00:07:43.544 --rc geninfo_all_blocks=1 00:07:43.544 --rc geninfo_unexecuted_blocks=1 00:07:43.544 00:07:43.544 ' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:43.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.544 --rc genhtml_branch_coverage=1 00:07:43.544 --rc genhtml_function_coverage=1 00:07:43.544 --rc genhtml_legend=1 00:07:43.544 --rc geninfo_all_blocks=1 00:07:43.544 --rc geninfo_unexecuted_blocks=1 00:07:43.544 00:07:43.544 ' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:43.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.544 --rc genhtml_branch_coverage=1 00:07:43.544 --rc genhtml_function_coverage=1 00:07:43.544 --rc genhtml_legend=1 00:07:43.544 --rc geninfo_all_blocks=1 00:07:43.544 --rc geninfo_unexecuted_blocks=1 00:07:43.544 
00:07:43.544 ' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:43.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:43.544 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:45.454 09:28:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:07:45.454 Found 0000:09:00.0 (0x8086 - 0x1592) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:07:45.454 Found 0000:09:00.1 (0x8086 - 0x1592) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:45.454 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:45.454 09:28:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:45.455 Found net devices under 0000:09:00.0: cvl_0_0 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:45.455 09:28:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:45.455 Found net devices under 0000:09:00.1: cvl_0_1 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.455 09:28:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:45.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:07:45.455 00:07:45.455 --- 10.0.0.2 ping statistics --- 00:07:45.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.455 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:07:45.455 00:07:45.455 --- 10.0.0.1 ping statistics --- 00:07:45.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.455 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:45.455 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=114917 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 114917 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 114917 ']' 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.714 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.714 [2024-10-07 09:28:34.515229] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:07:45.715 [2024-10-07 09:28:34.515307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.715 [2024-10-07 09:28:34.575743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.715 [2024-10-07 09:28:34.688841] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.715 [2024-10-07 09:28:34.688894] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.715 [2024-10-07 09:28:34.688924] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.715 [2024-10-07 09:28:34.688935] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.715 [2024-10-07 09:28:34.688945] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:45.715 [2024-10-07 09:28:34.692698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.715 [2024-10-07 09:28:34.692761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.715 [2024-10-07 09:28:34.692756] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.974 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.974 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:45.974 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:45.974 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:45.974 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.974 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.974 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:45.974 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:46.232 [2024-10-07 09:28:35.092891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.232 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:46.490 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.749 [2024-10-07 09:28:35.627381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.749 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.007 09:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:47.264 Malloc0 00:07:47.265 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:47.523 Delay0 00:07:47.523 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.782 09:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:48.040 NULL1 00:07:48.040 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:48.298 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=115324 00:07:48.298 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:48.298 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:48.298 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.676 Read completed with error (sct=0, sc=11) 00:07:49.676 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.933 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:49.933 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:50.190 true 00:07:50.190 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:50.190 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.756 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.324 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:51.324 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:51.324 true 00:07:51.324 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:51.324 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.582 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.150 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:52.150 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:52.150 true 00:07:52.150 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:52.150 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.407 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.665 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:52.665 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:52.925 true 00:07:53.185 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:53.185 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.123 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.380 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:54.380 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:54.638 true 00:07:54.638 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:54.638 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.896 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.155 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:55.155 09:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:55.414 true 00:07:55.414 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:55.414 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.673 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.932 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:55.932 09:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:56.191 true 00:07:56.191 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:56.191 09:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.129 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.387 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:57.387 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:57.645 true 00:07:57.645 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:57.645 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.904 09:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.163 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:58.163 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:58.421 true 00:07:58.421 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:58.421 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.991 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.991 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:58.991 09:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 
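The trace from ns_hotplug_stress.sh@44-50 repeats one stress iteration: confirm spdk_nvme_perf (PID 115324) is still alive, hot-remove namespace 1, re-attach Delay0, bump null_size, and resize NULL1. A condensed sketch of that loop, using `echo` as a stand-in for rpc.py so it runs without a live target (the PID and the three-iteration count here are placeholders):

```shell
# Sketch of the loop being traced (ns_hotplug_stress.sh@44-50).
# RPC defaults to 'echo'; against a real target it would be scripts/rpc.py.
RPC=${RPC:-echo}
NQN="nqn.2016-06.io.spdk:cnode1"
PERF_PID=$$          # stand-in for the spdk_nvme_perf pid (115324 in the log)
null_size=1000

for _ in 1 2 3; do
    kill -0 "$PERF_PID" || break              # sh@44: loop only while perf runs
    $RPC nvmf_subsystem_remove_ns "$NQN" 1    # sh@45: hot-remove nsid 1
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0  # sh@46: re-attach Delay0
    null_size=$((null_size + 1))              # sh@49
    $RPC bdev_null_resize NULL1 "$null_size"  # sh@50: grow NULL1
done
echo "null_size grew to $null_size"
```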
00:07:59.250 true 00:07:59.250 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:07:59.250 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.187 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.446 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:00.707 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:00.707 true 00:08:00.968 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:00.968 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.227 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.485 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:01.485 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:01.743 true 00:08:01.743 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:01.743 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.002 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.260 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:02.260 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:02.518 true 00:08:02.518 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:02.518 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.458 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.716 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1014 00:08:03.716 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:03.974 true 00:08:03.974 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:03.974 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.233 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.492 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:04.492 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:04.751 true 00:08:04.751 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:04.751 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.689 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.948 09:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:05.948 09:28:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:06.206 true 00:08:06.206 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:06.206 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.464 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.723 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:06.723 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:06.982 true 00:08:06.982 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:06.982 09:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.920 09:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.179 09:28:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:08.179 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:08.437 true 00:08:08.437 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:08.437 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.696 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.954 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:08.954 09:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:09.212 true 00:08:09.212 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:09.212 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.149 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.405 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:10.405 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:10.662 true 00:08:10.662 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:10.662 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.919 09:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.177 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:11.177 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:11.435 true 00:08:11.435 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:11.435 09:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.375 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.375 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:12.375 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:12.634 true 00:08:12.634 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:12.634 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.892 09:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.150 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:13.150 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:13.409 true 00:08:13.409 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:13.409 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.669 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.238 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:14.238 09:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 
00:08:14.238 true 00:08:14.238 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:14.238 09:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.617 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.617 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:15.617 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:15.875 true 00:08:15.875 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324 00:08:15.875 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.134 09:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.392 09:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:16.392 09:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:16.651 true 00:08:16.651 09:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
115324
00:08:16.651 09:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.909 09:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:17.167 09:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:17.167 09:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:17.426 true
00:08:17.426 09:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324
00:08:17.426 09:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.366 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:18.626 Initializing NVMe Controllers
00:08:18.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:18.626 Controller IO queue size 128, less than required.
00:08:18.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:18.626 Controller IO queue size 128, less than required.
00:08:18.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:18.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:18.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:18.626 Initialization complete. Launching workers.
00:08:18.626 ========================================================
00:08:18.626 Latency(us)
00:08:18.626 Device Information : IOPS MiB/s Average min max
00:08:18.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 603.99 0.29 94913.16 3379.37 1077699.95
00:08:18.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9245.65 4.51 13845.20 3479.69 490920.27
00:08:18.626 ========================================================
00:08:18.626 Total : 9849.64 4.81 18816.38 3379.37 1077699.95
00:08:18.626
00:08:18.626 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:18.626 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:19.194 true
00:08:19.194 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115324
00:08:19.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (115324) - No such process
00:08:19.194 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 115324
00:08:19.194 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.194 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 2 00:08:19.453 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:19.453 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:19.453 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:19.453 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.453 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:19.712 null0 00:08:19.712 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:19.712 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.712 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:19.970 null1 00:08:20.229 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.229 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.229 09:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:20.487 null2 00:08:20.487 09:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.487 09:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.487 09:29:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:20.745 null3 00:08:20.745 09:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.745 09:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.745 09:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:21.003 null4 00:08:21.003 09:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.003 09:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.003 09:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:21.262 null5 00:08:21.262 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.262 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.262 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:21.520 null6 00:08:21.520 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.520 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.520 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:21.779 null7 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
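The trace above loops eight times, each iteration creating one 100 MiB null bdev (null0 through null7) with a 4096-byte block size via `scripts/rpc.py bdev_null_create`. A minimal sketch of that loop, with the rpc.py call stubbed out so the snippet runs without a live SPDK target (the `rpc` stub is an assumption for illustration, not part of SPDK):

```shell
#!/usr/bin/env bash
# Sketch of the bdev-creation loop seen in the trace above.
# rpc() stands in for scripts/rpc.py; against a real target this would be:
#   scripts/rpc.py bdev_null_create <name> <size_mb> <block_size>
rpc() { echo "rpc.py $*"; }

nthreads=8
created=()
for (( i = 0; i < nthreads; i++ )); do
    # 100 MiB null bdev with 4096-byte blocks, matching the trace arguments
    rpc bdev_null_create "null$i" 100 4096
    created+=("null$i")
done

echo "created ${#created[@]} bdevs: ${created[*]}"
```

Each created bdev then serves as the backing device for one hot-plugged namespace in the stress loop that follows.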
00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.779 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
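At this point the script has forked one background `add_remove` worker per namespace, collecting each worker's PID into the `pids` array; each worker hot-adds and hot-removes its namespace ten times, and the parent then waits on all eight PIDs (the `wait 119215 119217 ...` line appears just below). A runnable sketch of that fork/collect/wait pattern, again with the rpc.py invocations stubbed out (the `rpc` stub is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the add_remove worker pattern from ns_hotplug_stress.sh:
# each worker repeatedly hot-adds and hot-removes one namespace.
rpc() { :; }  # stand-in for scripts/rpc.py against a live SPDK target

add_remove() {
    local nsid=$1 bdev=$2
    for (( i = 0; i < 10; i++ )); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for (( i = 0; i < nthreads; i++ )); do
    # nsid is 1-based while the bdev name index is 0-based, as in the trace
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done

wait "${pids[@]}"
echo "all ${#pids[@]} workers finished"
```

Running the workers concurrently is the point of the test: eight threads racing add/remove on the same subsystem exercises the namespace hot-plug paths under contention rather than in sequence.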
00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 119215 119217 119219 119221 119223 119225 119227 119229 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.780 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.039 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.039 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.039 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.039 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.039 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.039 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.039 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.039 09:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.297 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.556 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.556 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.556 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.556 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.556 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:22.556 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.556 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.556 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.121 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.122 09:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.122 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.380 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.380 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.380 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.380 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.380 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.380 09:29:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.380 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.638 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.896 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.896 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.896 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.896 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.896 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.896 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.897 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.897 09:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.155 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.414 09:29:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.414 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.414 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.414 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.414 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.414 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.414 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.414 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.672 09:29:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.672 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.930 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.930 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.930 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.930 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.930 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.187 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.187 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.187 09:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.445 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.446 09:29:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.446 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.704 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.705 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.705 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.705 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.705 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.705 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.705 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.705 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.964 
09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.964 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.222 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:26.222 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:26.222 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:26.222 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.222 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.222 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.222 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:26.222 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.481 09:29:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.481 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:26.740 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:26.740 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:26.740 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.740 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:26.740 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.740 09:29:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.740 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.999 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.258 
09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.258 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.517 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.517 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:27.517 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:27.517 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.517 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.517 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:27.517 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.517 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.775 rmmod nvme_tcp 00:08:27.775 rmmod nvme_fabrics 00:08:27.775 rmmod nvme_keyring 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 114917 ']' 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 114917 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' 
-z 114917 ']' 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 114917 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114917 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114917' 00:08:27.775 killing process with pid 114917 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 114917 00:08:27.775 09:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 114917 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # 
grep -v SPDK_NVMF 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.355 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.264 00:08:30.264 real 0m47.044s 00:08:30.264 user 3m38.425s 00:08:30.264 sys 0m15.455s 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.264 ************************************ 00:08:30.264 END TEST nvmf_ns_hotplug_stress 00:08:30.264 ************************************ 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.264 ************************************ 00:08:30.264 START TEST nvmf_delete_subsystem 00:08:30.264 ************************************ 00:08:30.264 09:29:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:30.264 * Looking for test storage... 00:08:30.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:30.264 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:30.523 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.524 09:29:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.524 09:29:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.524 --rc genhtml_branch_coverage=1 00:08:30.524 --rc genhtml_function_coverage=1 00:08:30.524 --rc genhtml_legend=1 00:08:30.524 --rc geninfo_all_blocks=1 00:08:30.524 --rc geninfo_unexecuted_blocks=1 00:08:30.524 00:08:30.524 ' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.524 --rc genhtml_branch_coverage=1 00:08:30.524 --rc genhtml_function_coverage=1 00:08:30.524 --rc genhtml_legend=1 00:08:30.524 --rc geninfo_all_blocks=1 00:08:30.524 --rc geninfo_unexecuted_blocks=1 00:08:30.524 00:08:30.524 ' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.524 --rc genhtml_branch_coverage=1 00:08:30.524 --rc genhtml_function_coverage=1 00:08:30.524 --rc genhtml_legend=1 00:08:30.524 --rc geninfo_all_blocks=1 00:08:30.524 --rc geninfo_unexecuted_blocks=1 00:08:30.524 00:08:30.524 ' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.524 --rc genhtml_branch_coverage=1 00:08:30.524 --rc genhtml_function_coverage=1 00:08:30.524 --rc genhtml_legend=1 00:08:30.524 --rc geninfo_all_blocks=1 00:08:30.524 --rc geninfo_unexecuted_blocks=1 00:08:30.524 00:08:30.524 ' 
00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.524 09:29:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.524 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.525 09:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.431 09:29:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:08:32.431 Found 0000:09:00.0 (0x8086 - 0x1592) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:08:32.431 Found 0000:09:00.1 (0x8086 - 0x1592) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:32.431 Found net devices under 0000:09:00.0: cvl_0_0 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:08:32.431 Found net devices under 0000:09:00.1: cvl_0_1 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.431 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms
00:08:32.690
00:08:32.690 --- 10.0.0.2 ping statistics ---
00:08:32.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:32.690 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:32.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:32.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:08:32.690
00:08:32.690 --- 10.0.0.1 ping statistics ---
00:08:32.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:32.690 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:32.690 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=121993
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 121993
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 121993 ']'
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:32.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:32.691 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:32.691 [2024-10-07 09:29:21.639253] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:08:32.691 [2024-10-07 09:29:21.639319] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:32.950 [2024-10-07 09:29:21.700048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:32.950 [2024-10-07 09:29:21.809502] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:32.950 [2024-10-07 09:29:21.809575] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:32.950 [2024-10-07 09:29:21.809603] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:32.950 [2024-10-07 09:29:21.809614] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:32.950 [2024-10-07 09:29:21.809624] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:32.950 [2024-10-07 09:29:21.813688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:08:32.950 [2024-10-07 09:29:21.813750] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:32.950 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:32.950 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0
00:08:32.950 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:08:32.950 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:32.950 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.208 [2024-10-07 09:29:21.962751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.208 [2024-10-07 09:29:21.978944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.208 NULL1
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.208 Delay0
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.208 09:29:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:33.208 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.208 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122020
00:08:33.208 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:33.209 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:33.209 [2024-10-07 09:29:22.053749] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:08:35.108 09:29:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.108 09:29:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.108 09:29:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error 
(sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 [2024-10-07 09:29:24.187549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752750 is same with the state(6) to be set 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with 
error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 
00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 [2024-10-07 09:29:24.188072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752390 is same with the state(6) to be set 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, 
sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 starting I/O failed: -6 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 [2024-10-07 09:29:24.188523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc354000c00 is same with the state(6) to be set 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write 
completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Write completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.367 Read completed with error (sct=0, sc=8) 00:08:35.368 Read completed with error (sct=0, sc=8) 00:08:35.368 Read completed with error (sct=0, sc=8) 00:08:35.368 Read completed with error (sct=0, sc=8) 00:08:35.368 Read completed with error (sct=0, sc=8) 00:08:35.368 Write completed with error (sct=0, sc=8) 00:08:35.368 Read completed with error (sct=0, sc=8) 00:08:35.368 Read completed with error (sct=0, sc=8) 00:08:35.368 Read completed with error (sct=0, sc=8) 00:08:36.303 [2024-10-07 09:29:25.155902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753a70 is same with the state(6) to be set 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error 
(sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 [2024-10-07 09:29:25.190240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc35400cfe0 is same with the state(6) to be set 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, 
sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 [2024-10-07 09:29:25.190574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc35400d640 is same with the state(6) to be set 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 [2024-10-07 09:29:25.191832] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752570 is same with the state(6) to be set 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Write completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 Read completed with error (sct=0, sc=8) 00:08:36.303 [2024-10-07 09:29:25.192052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752930 is same with the state(6) to be set 00:08:36.303 Initializing NVMe Controllers 00:08:36.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:08:36.303 Controller IO queue size 128, less than required.
00:08:36.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:36.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:36.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:36.303 Initialization complete. Launching workers.
00:08:36.303 ========================================================
00:08:36.303                                                                                                  Latency(us)
00:08:36.303 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:08:36.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2  :     169.79       0.08  929625.42     527.95 2003964.51
00:08:36.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3  :     154.40       0.08  954785.17     350.24 2000597.06
00:08:36.303 ========================================================
00:08:36.303 Total                                                                     :     324.18       0.16  941608.09     350.24 2003964.51
00:08:36.303
00:08:36.303 [2024-10-07 09:29:25.193029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1753a70 (9): Bad file descriptor
00:08:36.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:36.303 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.303 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:36.303 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122020
00:08:36.303 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- #
kill -0 122020
00:08:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122020) - No such process
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122020
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 122020
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 122020
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:36.870 [2024-10-07 09:29:25.715458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=122517
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122517
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:36.870 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:36.870 [2024-10-07 09:29:25.772015] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:08:37.437 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:37.437 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122517
00:08:37.437 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:38.003 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:38.003 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122517
00:08:38.003 09:29:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:38.260 09:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:38.260 09:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122517
00:08:38.260 09:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:38.825 09:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:38.825 09:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122517
00:08:38.825 09:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:39.392 09:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:39.392 09:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122517 00:08:39.392 09:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:39.957 09:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:39.957 09:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122517 00:08:39.957 09:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:39.957 Initializing NVMe Controllers 00:08:39.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:39.957 Controller IO queue size 128, less than required. 00:08:39.957 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:39.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:39.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:39.958 Initialization complete. Launching workers. 
00:08:39.958 ======================================================== 00:08:39.958 Latency(us) 00:08:39.958 Device Information : IOPS MiB/s Average min max 00:08:39.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004222.54 1000166.63 1011313.40 00:08:39.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004304.18 1000172.52 1011419.96 00:08:39.958 ======================================================== 00:08:39.958 Total : 256.00 0.12 1004263.36 1000166.63 1011419.96 00:08:39.958 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122517 00:08:40.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (122517) - No such process 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 122517 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:40.524 rmmod nvme_tcp 00:08:40.524 rmmod nvme_fabrics 00:08:40.524 rmmod nvme_keyring 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 121993 ']' 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 121993 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 121993 ']' 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 121993 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121993 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121993' 00:08:40.524 killing process with pid 121993 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 121993 00:08:40.524 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 121993 
00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.785 09:29:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.698 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.698 00:08:42.698 real 0m12.523s 00:08:42.698 user 0m27.985s 00:08:42.698 sys 0m2.984s 00:08:42.698 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.698 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.698 ************************************ 00:08:42.698 END TEST 
nvmf_delete_subsystem 00:08:42.698 ************************************ 00:08:42.698 09:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:42.698 09:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:42.698 09:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.698 09:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.960 ************************************ 00:08:42.960 START TEST nvmf_host_management 00:08:42.960 ************************************ 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:42.960 * Looking for test storage... 00:08:42.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.960 09:29:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:42.960 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:42.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.961 --rc genhtml_branch_coverage=1 00:08:42.961 --rc genhtml_function_coverage=1 00:08:42.961 --rc genhtml_legend=1 00:08:42.961 --rc 
geninfo_all_blocks=1 00:08:42.961 --rc geninfo_unexecuted_blocks=1 00:08:42.961 00:08:42.961 ' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:42.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.961 --rc genhtml_branch_coverage=1 00:08:42.961 --rc genhtml_function_coverage=1 00:08:42.961 --rc genhtml_legend=1 00:08:42.961 --rc geninfo_all_blocks=1 00:08:42.961 --rc geninfo_unexecuted_blocks=1 00:08:42.961 00:08:42.961 ' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:42.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.961 --rc genhtml_branch_coverage=1 00:08:42.961 --rc genhtml_function_coverage=1 00:08:42.961 --rc genhtml_legend=1 00:08:42.961 --rc geninfo_all_blocks=1 00:08:42.961 --rc geninfo_unexecuted_blocks=1 00:08:42.961 00:08:42.961 ' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:42.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.961 --rc genhtml_branch_coverage=1 00:08:42.961 --rc genhtml_function_coverage=1 00:08:42.961 --rc genhtml_legend=1 00:08:42.961 --rc geninfo_all_blocks=1 00:08:42.961 --rc geninfo_unexecuted_blocks=1 00:08:42.961 00:08:42.961 ' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.961 
09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.961 09:29:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:44.873 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:45.133 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:45.133 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.133 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.133 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:08:45.134 Found 0000:09:00.0 (0x8086 - 0x1592) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:08:45.134 Found 0000:09:00.1 (0x8086 - 0x1592) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.134 09:29:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:45.134 Found net devices under 0000:09:00.0: cvl_0_0 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:45.134 Found net devices under 0000:09:00.1: cvl_0_1 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.134 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:08:45.134 00:08:45.134 --- 10.0.0.2 ping statistics --- 00:08:45.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.134 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:08:45.134 00:08:45.134 --- 10.0.0.1 ping statistics --- 00:08:45.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.134 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:45.134 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.135 09:29:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=124764 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 124764 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 124764 ']' 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.135 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.135 [2024-10-07 09:29:34.087206] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:08:45.135 [2024-10-07 09:29:34.087272] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.393 [2024-10-07 09:29:34.148797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.393 [2024-10-07 09:29:34.259118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.393 [2024-10-07 09:29:34.259188] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.393 [2024-10-07 09:29:34.259202] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.393 [2024-10-07 09:29:34.259213] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.393 [2024-10-07 09:29:34.259222] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.393 [2024-10-07 09:29:34.260949] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.393 [2024-10-07 09:29:34.260987] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.393 [2024-10-07 09:29:34.261034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:45.393 [2024-10-07 09:29:34.261037] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.653 [2024-10-07 09:29:34.425409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:45.653 09:29:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.653 Malloc0 00:08:45.653 [2024-10-07 09:29:34.489969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=124811 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 124811 /var/tmp/bdevperf.sock 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 124811 ']' 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:45.653 { 00:08:45.653 "params": { 00:08:45.653 "name": "Nvme$subsystem", 00:08:45.653 "trtype": "$TEST_TRANSPORT", 00:08:45.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.653 "adrfam": "ipv4", 00:08:45.653 "trsvcid": "$NVMF_PORT", 00:08:45.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.653 "hdgst": ${hdgst:-false}, 
00:08:45.653 "ddgst": ${ddgst:-false} 00:08:45.653 }, 00:08:45.653 "method": "bdev_nvme_attach_controller" 00:08:45.653 } 00:08:45.653 EOF 00:08:45.653 )") 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:45.653 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:45.653 "params": { 00:08:45.653 "name": "Nvme0", 00:08:45.653 "trtype": "tcp", 00:08:45.653 "traddr": "10.0.0.2", 00:08:45.653 "adrfam": "ipv4", 00:08:45.653 "trsvcid": "4420", 00:08:45.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:45.653 "hdgst": false, 00:08:45.653 "ddgst": false 00:08:45.653 }, 00:08:45.653 "method": "bdev_nvme_attach_controller" 00:08:45.653 }' 00:08:45.653 [2024-10-07 09:29:34.576228] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:08:45.653 [2024-10-07 09:29:34.576317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124811 ] 00:08:45.653 [2024-10-07 09:29:34.638346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.912 [2024-10-07 09:29:34.748939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.170 Running I/O for 10 seconds... 
00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:46.170 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:46.429 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:46.429 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:46.429 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:46.429 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:46.429 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.429 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.429 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.688 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=550 00:08:46.688 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 550 -ge 100 ']' 00:08:46.688 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:46.688 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:46.688 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:46.688 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:46.688 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.688 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.688 [2024-10-07 09:29:35.453154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eb350 is same with the state(6) to be set 00:08:46.688 [2024-10-07 09:29:35.453272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eb350 is same with the state(6) to be set 00:08:46.688 [2024-10-07 09:29:35.453571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:46.688 [2024-10-07 09:29:35.453610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.688 [2024-10-07 09:29:35.453628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:46.688 [2024-10-07 09:29:35.453642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.453655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:08:46.689 [2024-10-07 09:29:35.453677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.453693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:46.689 [2024-10-07 09:29:35.453719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.453737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1709c90 is same with the state(6) to be set 00:08:46.689 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.689 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:46.689 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.689 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.689 [2024-10-07 09:29:35.459294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.459348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 
09:29:35.459379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.459408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.459437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.459465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.459493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.459521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:46.689 [2024-10-07 09:29:35.459548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.689 [2024-10-07 09:29:35.459561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeated for cid:9 through cid:62 (lba:83072 through lba:89856, len:128 each), timestamps 09:29:35.459576 through 09:29:35.461169, elided ...] 00:08:46.690 [2024-10-07 09:29:35.461184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.690 [2024-10-07 09:29:35.461197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:46.690 [2024-10-07 09:29:35.461282] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1921e70 was disconnected and freed. reset controller. 00:08:46.690 [2024-10-07 09:29:35.462386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:46.690 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:46.690 00:08:46.690 Latency(us) 00:08:46.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.690 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:46.690 Job: Nvme0n1 ended in about 0.41 seconds with error 00:08:46.690 Verification LBA range: start 0x0 length 0x400 00:08:46.690 Nvme0n1 : 0.41 1578.57 98.66 157.86 0.00 35809.10 2342.31 34952.53 00:08:46.690 =================================================================================================================== 00:08:46.690 Total : 1578.57 98.66 157.86 0.00 35809.10 2342.31 34952.53 00:08:46.690 [2024-10-07 09:29:35.464267] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.690 [2024-10-07 09:29:35.464311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1709c90 (9): Bad file descriptor 00:08:46.690 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.690 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:46.690 [2024-10-07 09:29:35.515837] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 124811 00:08:47.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (124811) - No such process 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:47.631 { 00:08:47.631 "params": { 00:08:47.631 "name": "Nvme$subsystem", 00:08:47.631 "trtype": "$TEST_TRANSPORT", 00:08:47.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.631 "adrfam": "ipv4", 00:08:47.631 "trsvcid": "$NVMF_PORT", 00:08:47.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.631 "hdgst": ${hdgst:-false}, 00:08:47.631 "ddgst": ${ddgst:-false} 00:08:47.631 }, 00:08:47.631 "method": "bdev_nvme_attach_controller" 00:08:47.631 } 00:08:47.631 EOF 00:08:47.631 )") 00:08:47.631 09:29:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:47.631 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:47.631 "params": { 00:08:47.631 "name": "Nvme0", 00:08:47.631 "trtype": "tcp", 00:08:47.631 "traddr": "10.0.0.2", 00:08:47.631 "adrfam": "ipv4", 00:08:47.631 "trsvcid": "4420", 00:08:47.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:47.631 "hdgst": false, 00:08:47.631 "ddgst": false 00:08:47.631 }, 00:08:47.631 "method": "bdev_nvme_attach_controller" 00:08:47.631 }' 00:08:47.631 [2024-10-07 09:29:36.519615] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:08:47.631 [2024-10-07 09:29:36.519729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125080 ] 00:08:47.631 [2024-10-07 09:29:36.577277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.890 [2024-10-07 09:29:36.692165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.890 Running I/O for 1 seconds... 
00:08:49.265 1664.00 IOPS, 104.00 MiB/s 00:08:49.265 Latency(us) 00:08:49.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.265 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:49.265 Verification LBA range: start 0x0 length 0x400 00:08:49.265 Nvme0n1 : 1.07 1615.65 100.98 0.00 0.00 37543.77 8932.31 49516.09 00:08:49.265 =================================================================================================================== 00:08:49.265 Total : 1615.65 100.98 0.00 0.00 37543.77 8932.31 49516.09 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.265 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:08:49.265 rmmod nvme_tcp 00:08:49.265 rmmod nvme_fabrics 00:08:49.265 rmmod nvme_keyring 00:08:49.522 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.522 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 124764 ']' 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 124764 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 124764 ']' 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 124764 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124764 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124764' 00:08:49.523 killing process with pid 124764 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 124764 00:08:49.523 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 124764 00:08:49.783 [2024-10-07 
09:29:38.576502] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.783 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.699 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.699 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:51.699 00:08:51.699 real 0m8.945s 00:08:51.699 user 0m20.219s 00:08:51.699 sys 0m2.867s 00:08:51.699 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:08:51.699 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:51.699 ************************************ 00:08:51.699 END TEST nvmf_host_management 00:08:51.699 ************************************ 00:08:51.699 09:29:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:51.699 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.699 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.699 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.958 ************************************ 00:08:51.958 START TEST nvmf_lvol 00:08:51.958 ************************************ 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:51.958 * Looking for test storage... 
00:08:51.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.958 09:29:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:51.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.958 --rc genhtml_branch_coverage=1 00:08:51.958 --rc genhtml_function_coverage=1 00:08:51.958 --rc genhtml_legend=1 00:08:51.958 --rc geninfo_all_blocks=1 00:08:51.958 --rc geninfo_unexecuted_blocks=1 
00:08:51.958 00:08:51.958 ' 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:51.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.958 --rc genhtml_branch_coverage=1 00:08:51.958 --rc genhtml_function_coverage=1 00:08:51.958 --rc genhtml_legend=1 00:08:51.958 --rc geninfo_all_blocks=1 00:08:51.958 --rc geninfo_unexecuted_blocks=1 00:08:51.958 00:08:51.958 ' 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:51.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.958 --rc genhtml_branch_coverage=1 00:08:51.958 --rc genhtml_function_coverage=1 00:08:51.958 --rc genhtml_legend=1 00:08:51.958 --rc geninfo_all_blocks=1 00:08:51.958 --rc geninfo_unexecuted_blocks=1 00:08:51.958 00:08:51.958 ' 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:51.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.958 --rc genhtml_branch_coverage=1 00:08:51.958 --rc genhtml_function_coverage=1 00:08:51.958 --rc genhtml_legend=1 00:08:51.958 --rc geninfo_all_blocks=1 00:08:51.958 --rc geninfo_unexecuted_blocks=1 00:08:51.958 00:08:51.958 ' 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.958 09:29:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.958 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.959 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.496 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:08:54.497 Found 0000:09:00.0 (0x8086 - 0x1592) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:08:54.497 Found 0000:09:00.1 (0x8086 - 0x1592) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.497 
09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:54.497 Found net devices under 0000:09:00.0: cvl_0_0 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:54.497 09:29:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:54.497 Found net devices under 0000:09:00.1: cvl_0_1 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:54.497 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:08:54.497 00:08:54.497 --- 10.0.0.2 ping statistics --- 00:08:54.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.497 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:08:54.497 00:08:54.497 --- 10.0.0.1 ping statistics --- 00:08:54.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.497 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=127176 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 127176 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 127176 ']' 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.497 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.498 [2024-10-07 09:29:43.108939] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:08:54.498 [2024-10-07 09:29:43.109017] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.498 [2024-10-07 09:29:43.172149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.498 [2024-10-07 09:29:43.284227] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.498 [2024-10-07 09:29:43.284297] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.498 [2024-10-07 09:29:43.284325] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.498 [2024-10-07 09:29:43.284336] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.498 [2024-10-07 09:29:43.284345] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:54.498 [2024-10-07 09:29:43.288687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.498 [2024-10-07 09:29:43.288758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.498 [2024-10-07 09:29:43.288753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.498 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.498 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:54.498 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:54.498 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.498 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.498 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.498 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:54.755 [2024-10-07 09:29:43.687511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.755 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.012 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:55.012 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.578 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:55.578 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:55.578 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:56.175 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=70cf684f-8f34-4fe4-9885-b9d809339026 00:08:56.175 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 70cf684f-8f34-4fe4-9885-b9d809339026 lvol 20 00:08:56.175 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e5e7fa8b-251a-440d-866c-29541cc69690 00:08:56.175 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:56.433 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5e7fa8b-251a-440d-866c-29541cc69690 00:08:56.691 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:56.949 [2024-10-07 09:29:45.917167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.949 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.206 09:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=127586 00:08:57.206 09:29:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:57.206 09:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:58.597 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e5e7fa8b-251a-440d-866c-29541cc69690 MY_SNAPSHOT 00:08:58.597 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1f8e34d1-0e7f-4c1a-8205-c49581f958fa 00:08:58.597 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e5e7fa8b-251a-440d-866c-29541cc69690 30 00:08:58.874 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1f8e34d1-0e7f-4c1a-8205-c49581f958fa MY_CLONE 00:08:59.163 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cb2a9c81-8432-4316-9734-1aee4f0e2a5e 00:08:59.163 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cb2a9c81-8432-4316-9734-1aee4f0e2a5e 00:09:00.207 09:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 127586 00:09:08.694 Initializing NVMe Controllers 00:09:08.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:08.694 Controller IO queue size 128, less than required. 00:09:08.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:08.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:08.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:08.694 Initialization complete. Launching workers. 00:09:08.694 ======================================================== 00:09:08.694 Latency(us) 00:09:08.694 Device Information : IOPS MiB/s Average min max 00:09:08.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10514.60 41.07 12176.79 1122.19 79070.01 00:09:08.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10438.80 40.78 12271.57 2007.63 55576.46 00:09:08.695 ======================================================== 00:09:08.695 Total : 20953.40 81.85 12224.01 1122.19 79070.01 00:09:08.695 00:09:08.695 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.695 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e5e7fa8b-251a-440d-866c-29541cc69690 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70cf684f-8f34-4fe4-9885-b9d809339026 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.695 rmmod nvme_tcp 00:09:08.695 rmmod nvme_fabrics 00:09:08.695 rmmod nvme_keyring 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 127176 ']' 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 127176 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 127176 ']' 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 127176 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127176 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127176' 00:09:08.695 killing process with pid 127176 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@969 -- # kill 127176 00:09:08.695 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 127176 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.978 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.985 00:09:10.985 real 0m19.155s 00:09:10.985 user 1m4.804s 00:09:10.985 sys 0m5.712s 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:10.985 ************************************ 00:09:10.985 END TEST nvmf_lvol 00:09:10.985 
************************************ 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.985 ************************************ 00:09:10.985 START TEST nvmf_lvs_grow 00:09:10.985 ************************************ 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:10.985 * Looking for test storage... 00:09:10.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:10.985 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:11.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.291 --rc genhtml_branch_coverage=1 00:09:11.291 --rc genhtml_function_coverage=1 00:09:11.291 --rc genhtml_legend=1 00:09:11.291 --rc geninfo_all_blocks=1 00:09:11.291 --rc geninfo_unexecuted_blocks=1 00:09:11.291 00:09:11.291 ' 
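The `lt 1.15 2` trace above walks `scripts/common.sh`'s `cmp_versions`, which splits each version string on `.` and `-` into arrays (`read -ra ver1`/`ver2`) and compares component by component, padding the shorter version with zeros. A minimal standalone sketch of that comparison logic (the `ver_cmp` helper below is illustrative, not the actual SPDK function):

```shell
#!/usr/bin/env bash
# Compare two dotted/dashed version strings numerically, component by
# component, padding missing components with 0 (so "2" behaves like "2.0").
# Prints "lt", "eq", or "gt" for $1 relative to $2.
ver_cmp() {
    local IFS=.-                      # split on '.' and '-', as cmp_versions does
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if ((x < y)); then echo lt; return; fi
        if ((x > y)); then echo gt; return; fi
    done
    echo eq
}
```

With this, `ver_cmp 1.15 2` yields `lt`, matching the lcov version gate the trace exercises before enabling `--rc lcov_branch_coverage=1`.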
00:09:11.291 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:11.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.291 --rc genhtml_branch_coverage=1 00:09:11.291 --rc genhtml_function_coverage=1 00:09:11.291 --rc genhtml_legend=1 00:09:11.291 --rc geninfo_all_blocks=1 00:09:11.292 --rc geninfo_unexecuted_blocks=1 00:09:11.292 00:09:11.292 ' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.292 --rc genhtml_branch_coverage=1 00:09:11.292 --rc genhtml_function_coverage=1 00:09:11.292 --rc genhtml_legend=1 00:09:11.292 --rc geninfo_all_blocks=1 00:09:11.292 --rc geninfo_unexecuted_blocks=1 00:09:11.292 00:09:11.292 ' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.292 --rc genhtml_branch_coverage=1 00:09:11.292 --rc genhtml_function_coverage=1 00:09:11.292 --rc genhtml_legend=1 00:09:11.292 --rc geninfo_all_blocks=1 00:09:11.292 --rc geninfo_unexecuted_blocks=1 00:09:11.292 00:09:11.292 ' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.292 09:30:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.292 
09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.292 09:30:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.292 
09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.292 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:09:13.224 Found 0000:09:00.0 (0x8086 - 0x1592) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:09:13.224 Found 0000:09:00.1 (0x8086 - 0x1592) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.224 
09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:13.224 Found net devices under 0000:09:00.0: cvl_0_0 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:13.224 Found net devices under 0000:09:00.1: cvl_0_1 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.224 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.225 09:30:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:09:13.225 00:09:13.225 --- 10.0.0.2 ping statistics --- 00:09:13.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.225 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:09:13.225 00:09:13.225 --- 10.0.0.1 ping statistics --- 00:09:13.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.225 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:13.225 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=130871 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 130871 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 130871 ']' 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.484 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.484 [2024-10-07 09:30:02.275993] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:09:13.484 [2024-10-07 09:30:02.276072] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.484 [2024-10-07 09:30:02.337386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.484 [2024-10-07 09:30:02.442639] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.484 [2024-10-07 09:30:02.442713] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.484 [2024-10-07 09:30:02.442729] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.484 [2024-10-07 09:30:02.442741] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.484 [2024-10-07 09:30:02.442751] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:13.484 [2024-10-07 09:30:02.443223] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.743 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.743 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:13.743 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:13.743 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.743 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.743 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.743 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.000 [2024-10-07 09:30:02.822775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.000 ************************************ 00:09:14.000 START TEST lvs_grow_clean 00:09:14.000 ************************************ 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.000 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.259 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.259 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:14.517 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e76023cf-be57-4662-a968-d999d32be369 00:09:14.517 09:30:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:14.517 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:14.775 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:14.775 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:14.775 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e76023cf-be57-4662-a968-d999d32be369 lvol 150 00:09:15.033 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7b741a6-5204-4747-a9df-d9f3d87cf442 00:09:15.033 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:15.033 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:15.292 [2024-10-07 09:30:04.266146] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:15.292 [2024-10-07 09:30:04.266252] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:15.292 true 00:09:15.292 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:15.292 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:15.858 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:15.858 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.858 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7b741a6-5204-4747-a9df-d9f3d87cf442 00:09:16.425 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.425 [2024-10-07 09:30:05.389678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.425 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=131302 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
131302 /var/tmp/bdevperf.sock 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 131302 ']' 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.683 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:16.942 [2024-10-07 09:30:05.717088] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:09:16.942 [2024-10-07 09:30:05.717171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131302 ] 00:09:16.942 [2024-10-07 09:30:05.772906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.942 [2024-10-07 09:30:05.881842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.201 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.201 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:17.201 09:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:17.459 Nvme0n1 00:09:17.459 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:17.717 [ 00:09:17.717 { 00:09:17.717 "name": "Nvme0n1", 00:09:17.717 "aliases": [ 00:09:17.717 "e7b741a6-5204-4747-a9df-d9f3d87cf442" 00:09:17.717 ], 00:09:17.717 "product_name": "NVMe disk", 00:09:17.717 "block_size": 4096, 00:09:17.717 "num_blocks": 38912, 00:09:17.717 "uuid": "e7b741a6-5204-4747-a9df-d9f3d87cf442", 00:09:17.717 "numa_id": 0, 00:09:17.717 "assigned_rate_limits": { 00:09:17.717 "rw_ios_per_sec": 0, 00:09:17.717 "rw_mbytes_per_sec": 0, 00:09:17.717 "r_mbytes_per_sec": 0, 00:09:17.717 "w_mbytes_per_sec": 0 00:09:17.717 }, 00:09:17.717 "claimed": false, 00:09:17.717 "zoned": false, 00:09:17.717 "supported_io_types": { 00:09:17.717 "read": true, 
00:09:17.717 "write": true, 00:09:17.717 "unmap": true, 00:09:17.717 "flush": true, 00:09:17.717 "reset": true, 00:09:17.717 "nvme_admin": true, 00:09:17.717 "nvme_io": true, 00:09:17.717 "nvme_io_md": false, 00:09:17.717 "write_zeroes": true, 00:09:17.717 "zcopy": false, 00:09:17.717 "get_zone_info": false, 00:09:17.717 "zone_management": false, 00:09:17.717 "zone_append": false, 00:09:17.717 "compare": true, 00:09:17.717 "compare_and_write": true, 00:09:17.717 "abort": true, 00:09:17.717 "seek_hole": false, 00:09:17.717 "seek_data": false, 00:09:17.717 "copy": true, 00:09:17.717 "nvme_iov_md": false 00:09:17.717 }, 00:09:17.717 "memory_domains": [ 00:09:17.717 { 00:09:17.717 "dma_device_id": "system", 00:09:17.717 "dma_device_type": 1 00:09:17.717 } 00:09:17.717 ], 00:09:17.717 "driver_specific": { 00:09:17.717 "nvme": [ 00:09:17.717 { 00:09:17.717 "trid": { 00:09:17.717 "trtype": "TCP", 00:09:17.717 "adrfam": "IPv4", 00:09:17.717 "traddr": "10.0.0.2", 00:09:17.717 "trsvcid": "4420", 00:09:17.717 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:17.717 }, 00:09:17.717 "ctrlr_data": { 00:09:17.717 "cntlid": 1, 00:09:17.717 "vendor_id": "0x8086", 00:09:17.717 "model_number": "SPDK bdev Controller", 00:09:17.717 "serial_number": "SPDK0", 00:09:17.717 "firmware_revision": "25.01", 00:09:17.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:17.717 "oacs": { 00:09:17.717 "security": 0, 00:09:17.717 "format": 0, 00:09:17.717 "firmware": 0, 00:09:17.717 "ns_manage": 0 00:09:17.717 }, 00:09:17.717 "multi_ctrlr": true, 00:09:17.717 "ana_reporting": false 00:09:17.717 }, 00:09:17.717 "vs": { 00:09:17.717 "nvme_version": "1.3" 00:09:17.717 }, 00:09:17.717 "ns_data": { 00:09:17.717 "id": 1, 00:09:17.717 "can_share": true 00:09:17.717 } 00:09:17.717 } 00:09:17.717 ], 00:09:17.717 "mp_policy": "active_passive" 00:09:17.717 } 00:09:17.717 } 00:09:17.717 ] 00:09:17.717 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=131432 
00:09:17.718 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:17.718 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:17.976 Running I/O for 10 seconds... 00:09:18.912 Latency(us) 00:09:18.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.912 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:09:18.912 =================================================================================================================== 00:09:18.912 Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:09:18.912 00:09:19.854 09:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e76023cf-be57-4662-a968-d999d32be369 00:09:19.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.854 Nvme0n1 : 2.00 15209.00 59.41 0.00 0.00 0.00 0.00 0.00 00:09:19.854 =================================================================================================================== 00:09:19.854 Total : 15209.00 59.41 0.00 0.00 0.00 0.00 0.00 00:09:19.854 00:09:20.113 true 00:09:20.113 09:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:20.113 09:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:20.377 09:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # 
data_clusters=99 00:09:20.377 09:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:20.377 09:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 131432 00:09:20.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.944 Nvme0n1 : 3.00 15336.33 59.91 0.00 0.00 0.00 0.00 0.00 00:09:20.944 =================================================================================================================== 00:09:20.944 Total : 15336.33 59.91 0.00 0.00 0.00 0.00 0.00 00:09:20.944 00:09:21.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.878 Nvme0n1 : 4.00 15407.50 60.19 0.00 0.00 0.00 0.00 0.00 00:09:21.878 =================================================================================================================== 00:09:21.878 Total : 15407.50 60.19 0.00 0.00 0.00 0.00 0.00 00:09:21.878 00:09:22.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.812 Nvme0n1 : 5.00 15526.40 60.65 0.00 0.00 0.00 0.00 0.00 00:09:22.812 =================================================================================================================== 00:09:22.812 Total : 15526.40 60.65 0.00 0.00 0.00 0.00 0.00 00:09:22.812 00:09:24.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.186 Nvme0n1 : 6.00 15605.67 60.96 0.00 0.00 0.00 0.00 0.00 00:09:24.186 =================================================================================================================== 00:09:24.186 Total : 15605.67 60.96 0.00 0.00 0.00 0.00 0.00 00:09:24.186 00:09:25.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.120 Nvme0n1 : 7.00 15680.43 61.25 0.00 0.00 0.00 0.00 0.00 00:09:25.120 
=================================================================================================================== 00:09:25.120 Total : 15680.43 61.25 0.00 0.00 0.00 0.00 0.00 00:09:25.120 00:09:26.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.056 Nvme0n1 : 8.00 15728.75 61.44 0.00 0.00 0.00 0.00 0.00 00:09:26.056 =================================================================================================================== 00:09:26.056 Total : 15728.75 61.44 0.00 0.00 0.00 0.00 0.00 00:09:26.056 00:09:26.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.992 Nvme0n1 : 9.00 15773.22 61.61 0.00 0.00 0.00 0.00 0.00 00:09:26.992 =================================================================================================================== 00:09:26.992 Total : 15773.22 61.61 0.00 0.00 0.00 0.00 0.00 00:09:26.992 00:09:27.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.925 Nvme0n1 : 10.00 15815.20 61.78 0.00 0.00 0.00 0.00 0.00 00:09:27.925 =================================================================================================================== 00:09:27.925 Total : 15815.20 61.78 0.00 0.00 0.00 0.00 0.00 00:09:27.925 00:09:27.925 00:09:27.925 Latency(us) 00:09:27.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.925 Nvme0n1 : 10.00 15815.33 61.78 0.00 0.00 8088.67 4320.52 18835.53 00:09:27.925 =================================================================================================================== 00:09:27.925 Total : 15815.33 61.78 0.00 0.00 8088.67 4320.52 18835.53 00:09:27.925 { 00:09:27.925 "results": [ 00:09:27.925 { 00:09:27.925 "job": "Nvme0n1", 00:09:27.925 "core_mask": "0x2", 00:09:27.925 "workload": "randwrite", 00:09:27.925 "status": "finished", 00:09:27.925 "queue_depth": 128, 
00:09:27.925 "io_size": 4096, 00:09:27.925 "runtime": 10.003966, 00:09:27.925 "iops": 15815.327641057556, 00:09:27.925 "mibps": 61.77862359788108, 00:09:27.925 "io_failed": 0, 00:09:27.925 "io_timeout": 0, 00:09:27.925 "avg_latency_us": 8088.6657454319375, 00:09:27.925 "min_latency_us": 4320.521481481482, 00:09:27.925 "max_latency_us": 18835.53185185185 00:09:27.925 } 00:09:27.925 ], 00:09:27.925 "core_count": 1 00:09:27.925 } 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 131302 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 131302 ']' 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 131302 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131302 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131302' 00:09:27.925 killing process with pid 131302 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 131302 00:09:27.925 Received shutdown signal, test time was about 10.000000 seconds 00:09:27.925 00:09:27.925 Latency(us) 00:09:27.925 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:09:27.925 =================================================================================================================== 00:09:27.925 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:27.925 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 131302 00:09:28.183 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.440 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:28.698 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:28.698 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:28.956 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:28.956 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:28.956 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.214 [2024-10-07 09:30:18.117513] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:29.214 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:29.472 request: 00:09:29.472 { 00:09:29.472 "uuid": "e76023cf-be57-4662-a968-d999d32be369", 00:09:29.472 "method": "bdev_lvol_get_lvstores", 00:09:29.472 "req_id": 1 00:09:29.472 } 00:09:29.472 Got JSON-RPC error response 00:09:29.472 response: 00:09:29.472 { 00:09:29.472 "code": -19, 00:09:29.472 "message": "No such device" 00:09:29.472 } 00:09:29.472 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:29.472 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:29.472 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:29.472 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:29.472 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.730 aio_bdev 00:09:29.730 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e7b741a6-5204-4747-a9df-d9f3d87cf442 00:09:29.730 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=e7b741a6-5204-4747-a9df-d9f3d87cf442 00:09:29.730 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.730 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:29.730 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.730 09:30:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.730 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.988 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7b741a6-5204-4747-a9df-d9f3d87cf442 -t 2000 00:09:30.246 [ 00:09:30.246 { 00:09:30.246 "name": "e7b741a6-5204-4747-a9df-d9f3d87cf442", 00:09:30.246 "aliases": [ 00:09:30.246 "lvs/lvol" 00:09:30.246 ], 00:09:30.246 "product_name": "Logical Volume", 00:09:30.246 "block_size": 4096, 00:09:30.246 "num_blocks": 38912, 00:09:30.246 "uuid": "e7b741a6-5204-4747-a9df-d9f3d87cf442", 00:09:30.246 "assigned_rate_limits": { 00:09:30.246 "rw_ios_per_sec": 0, 00:09:30.246 "rw_mbytes_per_sec": 0, 00:09:30.246 "r_mbytes_per_sec": 0, 00:09:30.246 "w_mbytes_per_sec": 0 00:09:30.246 }, 00:09:30.246 "claimed": false, 00:09:30.246 "zoned": false, 00:09:30.246 "supported_io_types": { 00:09:30.246 "read": true, 00:09:30.246 "write": true, 00:09:30.246 "unmap": true, 00:09:30.246 "flush": false, 00:09:30.246 "reset": true, 00:09:30.246 "nvme_admin": false, 00:09:30.246 "nvme_io": false, 00:09:30.246 "nvme_io_md": false, 00:09:30.246 "write_zeroes": true, 00:09:30.246 "zcopy": false, 00:09:30.246 "get_zone_info": false, 00:09:30.246 "zone_management": false, 00:09:30.246 "zone_append": false, 00:09:30.246 "compare": false, 00:09:30.246 "compare_and_write": false, 00:09:30.246 "abort": false, 00:09:30.246 "seek_hole": true, 00:09:30.246 "seek_data": true, 00:09:30.246 "copy": false, 00:09:30.246 "nvme_iov_md": false 00:09:30.246 }, 00:09:30.246 "driver_specific": { 00:09:30.246 "lvol": { 00:09:30.246 "lvol_store_uuid": "e76023cf-be57-4662-a968-d999d32be369", 00:09:30.246 "base_bdev": 
"aio_bdev", 00:09:30.246 "thin_provision": false, 00:09:30.246 "num_allocated_clusters": 38, 00:09:30.246 "snapshot": false, 00:09:30.246 "clone": false, 00:09:30.246 "esnap_clone": false 00:09:30.246 } 00:09:30.246 } 00:09:30.246 } 00:09:30.246 ] 00:09:30.246 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:30.246 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:30.246 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:30.503 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:30.504 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e76023cf-be57-4662-a968-d999d32be369 00:09:30.504 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:30.762 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:30.762 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7b741a6-5204-4747-a9df-d9f3d87cf442 00:09:31.021 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e76023cf-be57-4662-a968-d999d32be369 00:09:31.587 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.846 00:09:31.846 real 0m17.760s 00:09:31.846 user 0m17.305s 00:09:31.846 sys 0m1.830s 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:31.846 ************************************ 00:09:31.846 END TEST lvs_grow_clean 00:09:31.846 ************************************ 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:31.846 ************************************ 00:09:31.846 START TEST lvs_grow_dirty 00:09:31.846 ************************************ 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:31.846 09:30:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.846 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:32.104 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:32.104 09:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:32.363 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:32.363 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:32.363 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r 
'.[0].total_data_clusters' 00:09:32.621 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:32.621 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:32.621 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 lvol 150 00:09:32.879 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ebf2f17c-e34a-453c-96a3-16e4e2330ed4 00:09:32.879 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:32.879 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:33.138 [2024-10-07 09:30:22.048124] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:33.138 [2024-10-07 09:30:22.048224] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:33.138 true 00:09:33.138 09:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:33.138 09:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:33.396 09:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:09:33.396 09:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:33.654 09:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebf2f17c-e34a-453c-96a3-16e4e2330ed4 00:09:33.913 09:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:34.173 [2024-10-07 09:30:23.127371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.173 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=133893 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 133893 /var/tmp/bdevperf.sock 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@831 -- # '[' -z 133893 ']' 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:34.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.431 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.690 [2024-10-07 09:30:23.449204] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:09:34.690 [2024-10-07 09:30:23.449288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133893 ] 00:09:34.690 [2024-10-07 09:30:23.506274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.690 [2024-10-07 09:30:23.612289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.948 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.948 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:34.948 09:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:35.514 Nvme0n1 00:09:35.514 09:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:35.773 [ 00:09:35.773 { 00:09:35.773 "name": "Nvme0n1", 00:09:35.773 "aliases": [ 00:09:35.773 "ebf2f17c-e34a-453c-96a3-16e4e2330ed4" 00:09:35.773 ], 00:09:35.773 "product_name": "NVMe disk", 00:09:35.774 "block_size": 4096, 00:09:35.774 "num_blocks": 38912, 00:09:35.774 "uuid": "ebf2f17c-e34a-453c-96a3-16e4e2330ed4", 00:09:35.774 "numa_id": 0, 00:09:35.774 "assigned_rate_limits": { 00:09:35.774 "rw_ios_per_sec": 0, 00:09:35.774 "rw_mbytes_per_sec": 0, 00:09:35.774 "r_mbytes_per_sec": 0, 00:09:35.774 "w_mbytes_per_sec": 0 00:09:35.774 }, 00:09:35.774 "claimed": false, 00:09:35.774 "zoned": false, 00:09:35.774 "supported_io_types": { 00:09:35.774 "read": true, 
00:09:35.774 "write": true, 00:09:35.774 "unmap": true, 00:09:35.774 "flush": true, 00:09:35.774 "reset": true, 00:09:35.774 "nvme_admin": true, 00:09:35.774 "nvme_io": true, 00:09:35.774 "nvme_io_md": false, 00:09:35.774 "write_zeroes": true, 00:09:35.774 "zcopy": false, 00:09:35.774 "get_zone_info": false, 00:09:35.774 "zone_management": false, 00:09:35.774 "zone_append": false, 00:09:35.774 "compare": true, 00:09:35.774 "compare_and_write": true, 00:09:35.774 "abort": true, 00:09:35.774 "seek_hole": false, 00:09:35.774 "seek_data": false, 00:09:35.774 "copy": true, 00:09:35.774 "nvme_iov_md": false 00:09:35.774 }, 00:09:35.774 "memory_domains": [ 00:09:35.774 { 00:09:35.774 "dma_device_id": "system", 00:09:35.774 "dma_device_type": 1 00:09:35.774 } 00:09:35.774 ], 00:09:35.774 "driver_specific": { 00:09:35.774 "nvme": [ 00:09:35.774 { 00:09:35.774 "trid": { 00:09:35.774 "trtype": "TCP", 00:09:35.774 "adrfam": "IPv4", 00:09:35.774 "traddr": "10.0.0.2", 00:09:35.774 "trsvcid": "4420", 00:09:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:35.774 }, 00:09:35.774 "ctrlr_data": { 00:09:35.774 "cntlid": 1, 00:09:35.774 "vendor_id": "0x8086", 00:09:35.774 "model_number": "SPDK bdev Controller", 00:09:35.774 "serial_number": "SPDK0", 00:09:35.774 "firmware_revision": "25.01", 00:09:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:35.774 "oacs": { 00:09:35.774 "security": 0, 00:09:35.774 "format": 0, 00:09:35.774 "firmware": 0, 00:09:35.774 "ns_manage": 0 00:09:35.774 }, 00:09:35.774 "multi_ctrlr": true, 00:09:35.774 "ana_reporting": false 00:09:35.774 }, 00:09:35.774 "vs": { 00:09:35.774 "nvme_version": "1.3" 00:09:35.774 }, 00:09:35.774 "ns_data": { 00:09:35.774 "id": 1, 00:09:35.774 "can_share": true 00:09:35.774 } 00:09:35.774 } 00:09:35.774 ], 00:09:35.774 "mp_policy": "active_passive" 00:09:35.774 } 00:09:35.774 } 00:09:35.774 ] 00:09:35.774 09:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=134023 
00:09:35.774 09:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:35.774 09:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:35.774 Running I/O for 10 seconds... 00:09:36.711 Latency(us) 00:09:36.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.711 Nvme0n1 : 1.00 15178.00 59.29 0.00 0.00 0.00 0.00 0.00 00:09:36.711 =================================================================================================================== 00:09:36.711 Total : 15178.00 59.29 0.00 0.00 0.00 0.00 0.00 00:09:36.711 00:09:37.645 09:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:37.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.903 Nvme0n1 : 2.00 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:09:37.903 =================================================================================================================== 00:09:37.903 Total : 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:09:37.903 00:09:37.903 true 00:09:37.903 09:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:37.903 09:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:38.162 09:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # 
data_clusters=99 00:09:38.162 09:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:38.162 09:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 134023 00:09:38.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.729 Nvme0n1 : 3.00 15453.33 60.36 0.00 0.00 0.00 0.00 0.00 00:09:38.729 =================================================================================================================== 00:09:38.729 Total : 15453.33 60.36 0.00 0.00 0.00 0.00 0.00 00:09:38.729 00:09:39.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.665 Nvme0n1 : 4.00 15575.00 60.84 0.00 0.00 0.00 0.00 0.00 00:09:39.665 =================================================================================================================== 00:09:39.665 Total : 15575.00 60.84 0.00 0.00 0.00 0.00 0.00 00:09:39.665 00:09:41.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.039 Nvme0n1 : 5.00 15660.40 61.17 0.00 0.00 0.00 0.00 0.00 00:09:41.039 =================================================================================================================== 00:09:41.039 Total : 15660.40 61.17 0.00 0.00 0.00 0.00 0.00 00:09:41.039 00:09:41.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.976 Nvme0n1 : 6.00 15696.17 61.31 0.00 0.00 0.00 0.00 0.00 00:09:41.976 =================================================================================================================== 00:09:41.976 Total : 15696.17 61.31 0.00 0.00 0.00 0.00 0.00 00:09:41.976 00:09:42.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.911 Nvme0n1 : 7.00 15758.00 61.55 0.00 0.00 0.00 0.00 0.00 00:09:42.911 
=================================================================================================================== 00:09:42.911 Total : 15758.00 61.55 0.00 0.00 0.00 0.00 0.00 00:09:42.911 00:09:43.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.847 Nvme0n1 : 8.00 15804.38 61.74 0.00 0.00 0.00 0.00 0.00 00:09:43.847 =================================================================================================================== 00:09:43.847 Total : 15804.38 61.74 0.00 0.00 0.00 0.00 0.00 00:09:43.847 00:09:44.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.782 Nvme0n1 : 9.00 15840.44 61.88 0.00 0.00 0.00 0.00 0.00 00:09:44.782 =================================================================================================================== 00:09:44.782 Total : 15840.44 61.88 0.00 0.00 0.00 0.00 0.00 00:09:44.782 00:09:45.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.717 Nvme0n1 : 10.00 15882.30 62.04 0.00 0.00 0.00 0.00 0.00 00:09:45.717 =================================================================================================================== 00:09:45.717 Total : 15882.30 62.04 0.00 0.00 0.00 0.00 0.00 00:09:45.717 00:09:45.717 00:09:45.717 Latency(us) 00:09:45.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.717 Nvme0n1 : 10.01 15883.81 62.05 0.00 0.00 8054.02 4393.34 16505.36 00:09:45.717 =================================================================================================================== 00:09:45.717 Total : 15883.81 62.05 0.00 0.00 8054.02 4393.34 16505.36 00:09:45.717 { 00:09:45.717 "results": [ 00:09:45.717 { 00:09:45.717 "job": "Nvme0n1", 00:09:45.717 "core_mask": "0x2", 00:09:45.717 "workload": "randwrite", 00:09:45.717 "status": "finished", 00:09:45.717 "queue_depth": 128, 
00:09:45.717 "io_size": 4096, 00:09:45.717 "runtime": 10.007106, 00:09:45.717 "iops": 15883.81296250884, 00:09:45.717 "mibps": 62.04614438480016, 00:09:45.717 "io_failed": 0, 00:09:45.718 "io_timeout": 0, 00:09:45.718 "avg_latency_us": 8054.0203033546095, 00:09:45.718 "min_latency_us": 4393.339259259259, 00:09:45.718 "max_latency_us": 16505.36296296296 00:09:45.718 } 00:09:45.718 ], 00:09:45.718 "core_count": 1 00:09:45.718 } 00:09:45.718 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 133893 00:09:45.718 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 133893 ']' 00:09:45.718 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 133893 00:09:45.718 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:45.718 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.718 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 133893 00:09:45.977 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:45.977 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:45.977 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 133893' 00:09:45.977 killing process with pid 133893 00:09:45.977 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 133893 00:09:45.977 Received shutdown signal, test time was about 10.000000 seconds 00:09:45.977 00:09:45.977 Latency(us) 00:09:45.977 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:09:45.977 =================================================================================================================== 00:09:45.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:45.977 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 133893 00:09:46.235 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.493 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.751 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:46.751 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 130871 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 130871 00:09:47.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 130871 Killed "${NVMF_APP[@]}" "$@" 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:47.009 09:30:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=135299 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 135299 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 135299 ']' 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.009 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.010 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.010 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.010 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:47.010 [2024-10-07 09:30:35.884178] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:09:47.010 [2024-10-07 09:30:35.884256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.010 [2024-10-07 09:30:35.945069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.268 [2024-10-07 09:30:36.048476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.268 [2024-10-07 09:30:36.048538] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.268 [2024-10-07 09:30:36.048567] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.268 [2024-10-07 09:30:36.048579] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.268 [2024-10-07 09:30:36.048589] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:47.268 [2024-10-07 09:30:36.049157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.268 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.268 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:47.268 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:47.268 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.268 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:47.268 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.268 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:47.527 [2024-10-07 09:30:36.429127] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:47.527 [2024-10-07 09:30:36.429259] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:47.527 [2024-10-07 09:30:36.429305] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:47.527 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:47.527 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ebf2f17c-e34a-453c-96a3-16e4e2330ed4 00:09:47.527 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ebf2f17c-e34a-453c-96a3-16e4e2330ed4 
00:09:47.527 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.527 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:47.527 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.527 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.527 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:47.786 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ebf2f17c-e34a-453c-96a3-16e4e2330ed4 -t 2000 00:09:48.045 [ 00:09:48.045 { 00:09:48.045 "name": "ebf2f17c-e34a-453c-96a3-16e4e2330ed4", 00:09:48.045 "aliases": [ 00:09:48.045 "lvs/lvol" 00:09:48.045 ], 00:09:48.045 "product_name": "Logical Volume", 00:09:48.045 "block_size": 4096, 00:09:48.045 "num_blocks": 38912, 00:09:48.045 "uuid": "ebf2f17c-e34a-453c-96a3-16e4e2330ed4", 00:09:48.045 "assigned_rate_limits": { 00:09:48.045 "rw_ios_per_sec": 0, 00:09:48.045 "rw_mbytes_per_sec": 0, 00:09:48.045 "r_mbytes_per_sec": 0, 00:09:48.045 "w_mbytes_per_sec": 0 00:09:48.045 }, 00:09:48.045 "claimed": false, 00:09:48.045 "zoned": false, 00:09:48.045 "supported_io_types": { 00:09:48.045 "read": true, 00:09:48.045 "write": true, 00:09:48.045 "unmap": true, 00:09:48.045 "flush": false, 00:09:48.045 "reset": true, 00:09:48.045 "nvme_admin": false, 00:09:48.045 "nvme_io": false, 00:09:48.045 "nvme_io_md": false, 00:09:48.045 "write_zeroes": true, 00:09:48.045 "zcopy": false, 00:09:48.045 "get_zone_info": false, 00:09:48.045 "zone_management": false, 00:09:48.045 "zone_append": 
false, 00:09:48.045 "compare": false, 00:09:48.045 "compare_and_write": false, 00:09:48.045 "abort": false, 00:09:48.045 "seek_hole": true, 00:09:48.045 "seek_data": true, 00:09:48.045 "copy": false, 00:09:48.045 "nvme_iov_md": false 00:09:48.045 }, 00:09:48.045 "driver_specific": { 00:09:48.045 "lvol": { 00:09:48.045 "lvol_store_uuid": "48f48b31-acf8-4f65-8b0d-0f1634f57018", 00:09:48.045 "base_bdev": "aio_bdev", 00:09:48.045 "thin_provision": false, 00:09:48.045 "num_allocated_clusters": 38, 00:09:48.045 "snapshot": false, 00:09:48.045 "clone": false, 00:09:48.045 "esnap_clone": false 00:09:48.045 } 00:09:48.045 } 00:09:48.045 } 00:09:48.045 ] 00:09:48.045 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:48.045 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:48.045 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:48.305 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:48.305 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:48.305 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:48.563 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:48.563 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:48.822 [2024-10-07 09:30:37.774985] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.822 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.822 09:30:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:48.823 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:49.081 request: 00:09:49.081 { 00:09:49.081 "uuid": "48f48b31-acf8-4f65-8b0d-0f1634f57018", 00:09:49.081 "method": "bdev_lvol_get_lvstores", 00:09:49.081 "req_id": 1 00:09:49.081 } 00:09:49.081 Got JSON-RPC error response 00:09:49.081 response: 00:09:49.081 { 00:09:49.081 "code": -19, 00:09:49.081 "message": "No such device" 00:09:49.081 } 00:09:49.081 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:49.081 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.081 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:49.081 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.081 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:49.340 aio_bdev 00:09:49.599 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ebf2f17c-e34a-453c-96a3-16e4e2330ed4 00:09:49.599 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ebf2f17c-e34a-453c-96a3-16e4e2330ed4 00:09:49.599 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.599 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:49.599 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.599 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.599 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:49.857 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ebf2f17c-e34a-453c-96a3-16e4e2330ed4 -t 2000 00:09:50.115 [ 00:09:50.115 { 00:09:50.115 "name": "ebf2f17c-e34a-453c-96a3-16e4e2330ed4", 00:09:50.115 "aliases": [ 00:09:50.115 "lvs/lvol" 00:09:50.115 ], 00:09:50.115 "product_name": "Logical Volume", 00:09:50.115 "block_size": 4096, 00:09:50.115 "num_blocks": 38912, 00:09:50.115 "uuid": "ebf2f17c-e34a-453c-96a3-16e4e2330ed4", 00:09:50.115 "assigned_rate_limits": { 00:09:50.115 "rw_ios_per_sec": 0, 00:09:50.115 "rw_mbytes_per_sec": 0, 00:09:50.115 "r_mbytes_per_sec": 0, 00:09:50.115 "w_mbytes_per_sec": 0 00:09:50.115 }, 00:09:50.115 "claimed": false, 00:09:50.115 "zoned": false, 00:09:50.115 "supported_io_types": { 00:09:50.115 "read": true, 00:09:50.115 "write": true, 00:09:50.115 "unmap": true, 00:09:50.115 "flush": false, 00:09:50.115 "reset": true, 00:09:50.115 "nvme_admin": false, 00:09:50.115 "nvme_io": false, 00:09:50.115 "nvme_io_md": false, 00:09:50.115 "write_zeroes": true, 00:09:50.115 "zcopy": false, 00:09:50.115 "get_zone_info": false, 00:09:50.115 "zone_management": false, 00:09:50.115 "zone_append": false, 00:09:50.115 "compare": false, 00:09:50.115 "compare_and_write": false, 
00:09:50.115 "abort": false, 00:09:50.115 "seek_hole": true, 00:09:50.115 "seek_data": true, 00:09:50.115 "copy": false, 00:09:50.115 "nvme_iov_md": false 00:09:50.115 }, 00:09:50.115 "driver_specific": { 00:09:50.115 "lvol": { 00:09:50.115 "lvol_store_uuid": "48f48b31-acf8-4f65-8b0d-0f1634f57018", 00:09:50.115 "base_bdev": "aio_bdev", 00:09:50.115 "thin_provision": false, 00:09:50.115 "num_allocated_clusters": 38, 00:09:50.115 "snapshot": false, 00:09:50.115 "clone": false, 00:09:50.115 "esnap_clone": false 00:09:50.115 } 00:09:50.115 } 00:09:50.115 } 00:09:50.115 ] 00:09:50.115 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:50.115 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:50.115 09:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:50.374 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:50.374 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:50.374 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:50.631 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:50.631 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ebf2f17c-e34a-453c-96a3-16e4e2330ed4 00:09:50.888 09:30:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48f48b31-acf8-4f65-8b0d-0f1634f57018 00:09:51.146 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:51.403 00:09:51.403 real 0m19.609s 00:09:51.403 user 0m49.595s 00:09:51.403 sys 0m4.581s 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:51.403 ************************************ 00:09:51.403 END TEST lvs_grow_dirty 00:09:51.403 ************************************ 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:51.403 nvmf_trace.0 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.403 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.403 rmmod nvme_tcp 00:09:51.403 rmmod nvme_fabrics 00:09:51.403 rmmod nvme_keyring 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 135299 ']' 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 135299 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 135299 ']' 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 135299 
00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 135299 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 135299' 00:09:51.660 killing process with pid 135299 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 135299 00:09:51.660 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 135299 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.919 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.829 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.829 00:09:53.829 real 0m42.841s 00:09:53.829 user 1m12.978s 00:09:53.829 sys 0m8.301s 00:09:53.829 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.829 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:53.829 ************************************ 00:09:53.829 END TEST nvmf_lvs_grow 00:09:53.829 ************************************ 00:09:53.829 09:30:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:53.829 09:30:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:53.829 09:30:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.829 09:30:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.829 ************************************ 00:09:53.829 START TEST nvmf_bdev_io_wait 00:09:53.829 ************************************ 00:09:53.829 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:54.089 * Looking for test storage... 
00:09:54.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:54.089 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.089 --rc genhtml_branch_coverage=1 00:09:54.089 --rc genhtml_function_coverage=1 00:09:54.089 --rc genhtml_legend=1 00:09:54.089 --rc geninfo_all_blocks=1 00:09:54.089 --rc geninfo_unexecuted_blocks=1 00:09:54.089 00:09:54.089 ' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:54.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.089 --rc genhtml_branch_coverage=1 00:09:54.089 --rc genhtml_function_coverage=1 00:09:54.089 --rc genhtml_legend=1 00:09:54.089 --rc geninfo_all_blocks=1 00:09:54.089 --rc geninfo_unexecuted_blocks=1 00:09:54.089 00:09:54.089 ' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:54.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.089 --rc genhtml_branch_coverage=1 00:09:54.089 --rc genhtml_function_coverage=1 00:09:54.089 --rc genhtml_legend=1 00:09:54.089 --rc geninfo_all_blocks=1 00:09:54.089 --rc geninfo_unexecuted_blocks=1 00:09:54.089 00:09:54.089 ' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:54.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.089 --rc genhtml_branch_coverage=1 00:09:54.089 --rc genhtml_function_coverage=1 00:09:54.089 --rc genhtml_legend=1 00:09:54.089 --rc geninfo_all_blocks=1 00:09:54.089 --rc geninfo_unexecuted_blocks=1 00:09:54.089 00:09:54.089 ' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.089 09:30:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.089 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.090 09:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.001 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.002 09:30:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:09:56.002 Found 0000:09:00.0 (0x8086 - 0x1592) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:09:56.002 Found 0000:09:00.1 (0x8086 - 0x1592) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:09:56.002 09:30:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:56.002 Found net devices under 0000:09:00.0: cvl_0_0 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.002 
09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:56.002 Found net devices under 0000:09:00.1: cvl_0_1 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.002 09:30:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.002 09:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:09:56.262 00:09:56.262 --- 10.0.0.2 ping statistics --- 00:09:56.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.262 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
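The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@250–291) splits one physical NIC pair into a target side, isolated in a network namespace, and an initiator side left in the root namespace, then opens TCP port 4420 through iptables. A condensed sketch of that sequence, with a hypothetical `nvmf_tcp_init_sketch` helper; `RUN=echo` keeps it a dry run that only prints the commands, since the real harness executes them under sudo against real interfaces:

```shell
#!/usr/bin/env bash
# Dry-run replay of the namespace setup; set RUN= (empty) only on a test box
# where you really want these executed.
RUN=${RUN:-echo}

nvmf_tcp_init_sketch() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    $RUN ip netns add "$ns"
    # Target NIC moves into the namespace; initiator NIC stays in the root ns.
    $RUN ip link set "$target_if" netns "$ns"
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # Same rule the harness's `ipts` wrapper inserts (with an SPDK_NVMF comment
    # tag so cleanup can strip it later).
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
```

The cross-namespace pings that follow in the log (`ping -c 1 10.0.0.2`, then the same from inside the namespace toward 10.0.0.1) are the smoke test that this wiring worked.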
00:09:56.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:09:56.262 00:09:56.262 --- 10.0.0.1 ping statistics --- 00:09:56.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.262 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=137773 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 137773 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 137773 ']' 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.262 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.262 [2024-10-07 09:30:45.115796] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:09:56.262 [2024-10-07 09:30:45.115892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.262 [2024-10-07 09:30:45.179029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.521 [2024-10-07 09:30:45.289538] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.521 [2024-10-07 09:30:45.289600] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
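`waitforlisten` (autotest_common.sh@831–840 above) blocks until the freshly launched `nvmf_tgt` is alive and listening on the RPC socket `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A simplified, self-contained sketch of that polling loop — note the real helper probes a UNIX-domain socket via RPC, while this stand-in just checks path existence so it runs anywhere:

```shell
# waitforlisten_sketch PID [RPC_ADDR] — hypothetical stand-in for waitforlisten.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        # If the app died before it ever listened, fail fast instead of
        # burning all retries.
        kill -0 "$pid" 2> /dev/null || return 1
        # Simplification: the real helper connects to the UNIX socket over RPC;
        # here mere existence of the path counts as "listening".
        [[ -e $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1
}
```

This is why the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." before any RPC (`rpc_cmd`) calls are issued against the target.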
00:09:56.521 [2024-10-07 09:30:45.289627] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.521 [2024-10-07 09:30:45.289639] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.521 [2024-10-07 09:30:45.289648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.521 [2024-10-07 09:30:45.292688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.521 [2024-10-07 09:30:45.292769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.521 [2024-10-07 09:30:45.292836] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.521 [2024-10-07 09:30:45.292840] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.521 09:30:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.521 [2024-10-07 09:30:45.456456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.521 Malloc0 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.521 
09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.521 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:56.780 [2024-10-07 09:30:45.524179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=137864 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:56.780 
09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=137866 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:56.780 { 00:09:56.780 "params": { 00:09:56.780 "name": "Nvme$subsystem", 00:09:56.780 "trtype": "$TEST_TRANSPORT", 00:09:56.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.780 "adrfam": "ipv4", 00:09:56.780 "trsvcid": "$NVMF_PORT", 00:09:56.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.780 "hdgst": ${hdgst:-false}, 00:09:56.780 "ddgst": ${ddgst:-false} 00:09:56.780 }, 00:09:56.780 "method": "bdev_nvme_attach_controller" 00:09:56.780 } 00:09:56.780 EOF 00:09:56.780 )") 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=137868 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:56.780 { 00:09:56.780 "params": { 00:09:56.780 
"name": "Nvme$subsystem", 00:09:56.780 "trtype": "$TEST_TRANSPORT", 00:09:56.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.780 "adrfam": "ipv4", 00:09:56.780 "trsvcid": "$NVMF_PORT", 00:09:56.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.780 "hdgst": ${hdgst:-false}, 00:09:56.780 "ddgst": ${ddgst:-false} 00:09:56.780 }, 00:09:56.780 "method": "bdev_nvme_attach_controller" 00:09:56.780 } 00:09:56.780 EOF 00:09:56.780 )") 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=137871 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:56.780 { 00:09:56.780 "params": { 00:09:56.780 "name": "Nvme$subsystem", 00:09:56.780 "trtype": "$TEST_TRANSPORT", 00:09:56.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.780 "adrfam": "ipv4", 00:09:56.780 "trsvcid": "$NVMF_PORT", 00:09:56.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.780 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:56.780 "hdgst": ${hdgst:-false}, 00:09:56.780 "ddgst": ${ddgst:-false} 00:09:56.780 }, 00:09:56.780 "method": "bdev_nvme_attach_controller" 00:09:56.780 } 00:09:56.780 EOF 00:09:56.780 )") 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:56.780 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:56.780 { 00:09:56.780 "params": { 00:09:56.780 "name": "Nvme$subsystem", 00:09:56.780 "trtype": "$TEST_TRANSPORT", 00:09:56.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.780 "adrfam": "ipv4", 00:09:56.780 "trsvcid": "$NVMF_PORT", 00:09:56.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.781 "hdgst": ${hdgst:-false}, 00:09:56.781 "ddgst": ${ddgst:-false} 00:09:56.781 }, 00:09:56.781 "method": "bdev_nvme_attach_controller" 00:09:56.781 } 00:09:56.781 EOF 00:09:56.781 )") 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 137864 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:56.781 "params": { 00:09:56.781 "name": "Nvme1", 00:09:56.781 "trtype": "tcp", 00:09:56.781 "traddr": "10.0.0.2", 00:09:56.781 "adrfam": "ipv4", 00:09:56.781 "trsvcid": "4420", 00:09:56.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.781 "hdgst": false, 00:09:56.781 "ddgst": false 00:09:56.781 }, 00:09:56.781 "method": "bdev_nvme_attach_controller" 00:09:56.781 }' 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
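Structurally, bdev_io_wait.sh@27–40 launches four bdevperf instances in the background — write (`-m 0x10`), read (`-m 0x20`), flush (`-m 0x40`), unmap (`-m 0x80`), each with its own core mask and `--file-prefix` — records their PIDs (`WRITE_PID=137864`, etc.), and later joins each with `wait <pid>` so a failing workload fails the test. A minimal sketch of that launch/join pattern; `run_io_job` is a placeholder standing in for a real `bdevperf --json /dev/fd/63 -q 128 -o 4096 -w "$w" -t 1` invocation:

```shell
# Placeholder workload: the real harness runs bdevperf here.
run_io_job() {
    sleep 0.1
    echo "$1 done"
}

tmpdir=$(mktemp -d)
pids=()
workloads=(write read flush unmap)

# Launch all four concurrently, remembering each PID, like
# WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID in the script.
for w in "${workloads[@]}"; do
    run_io_job "$w" > "$tmpdir/$w.out" &
    pids+=($!)
done

# Join them one by one; `wait <pid>` propagates each job's exit status,
# which is what the traced `wait 137864` / `wait 137866` / ... lines do.
for pid in "${pids[@]}"; do
    wait "$pid"
done
```

Running the workloads concurrently against one subsystem is the point of the test: it exercises bdev IO-wait queuing while writes, reads, flushes, and unmaps contend for the same controller.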
00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:56.781 "params": { 00:09:56.781 "name": "Nvme1", 00:09:56.781 "trtype": "tcp", 00:09:56.781 "traddr": "10.0.0.2", 00:09:56.781 "adrfam": "ipv4", 00:09:56.781 "trsvcid": "4420", 00:09:56.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.781 "hdgst": false, 00:09:56.781 "ddgst": false 00:09:56.781 }, 00:09:56.781 "method": "bdev_nvme_attach_controller" 00:09:56.781 }' 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:56.781 "params": { 00:09:56.781 "name": "Nvme1", 00:09:56.781 "trtype": "tcp", 00:09:56.781 "traddr": "10.0.0.2", 00:09:56.781 "adrfam": "ipv4", 00:09:56.781 "trsvcid": "4420", 00:09:56.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.781 "hdgst": false, 00:09:56.781 "ddgst": false 00:09:56.781 }, 00:09:56.781 "method": "bdev_nvme_attach_controller" 00:09:56.781 }' 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:56.781 09:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:56.781 "params": { 00:09:56.781 "name": "Nvme1", 00:09:56.781 "trtype": "tcp", 00:09:56.781 "traddr": "10.0.0.2", 00:09:56.781 "adrfam": "ipv4", 00:09:56.781 "trsvcid": "4420", 00:09:56.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.781 "hdgst": false, 00:09:56.781 "ddgst": false 00:09:56.781 }, 00:09:56.781 "method": "bdev_nvme_attach_controller" 00:09:56.781 }' 00:09:56.781 [2024-10-07 09:30:45.573697] Starting SPDK v25.01-pre git sha1 
3365e5306 / DPDK 24.03.0 initialization... 00:09:56.781 [2024-10-07 09:30:45.573692] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:09:56.781 [2024-10-07 09:30:45.573779] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:56.781 [2024-10-07 09:30:45.573779] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:56.781 [2024-10-07 09:30:45.574200] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:09:56.781 [2024-10-07 09:30:45.574214] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:09:56.781 [2024-10-07 09:30:45.574271] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:56.781 [2024-10-07 09:30:45.574272] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:56.781 [2024-10-07 09:30:45.746522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.040 [2024-10-07 09:30:45.846376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:09:57.040 [2024-10-07 09:30:45.850334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.040 [2024-10-07 09:30:45.947954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.040 [2024-10-07
09:30:45.953126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.040 [2024-10-07 09:30:46.008022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.298 [2024-10-07 09:30:46.050061] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:09:57.298 [2024-10-07 09:30:46.100658] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:09:57.298 Running I/O for 1 seconds... 00:09:57.298 Running I/O for 1 seconds... 00:09:57.556 Running I/O for 1 seconds... 00:09:57.556 Running I/O for 1 seconds... 00:09:58.491 7476.00 IOPS, 29.20 MiB/s 00:09:58.491 Latency(us) 00:09:58.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.491 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:58.491 Nvme1n1 : 1.02 7466.02 29.16 0.00 0.00 17006.59 8543.95 29515.47 00:09:58.491 =================================================================================================================== 00:09:58.491 Total : 7466.02 29.16 0.00 0.00 17006.59 8543.95 29515.47 00:09:58.491 8257.00 IOPS, 32.25 MiB/s 00:09:58.491 Latency(us) 00:09:58.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.491 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:58.491 Nvme1n1 : 1.01 8314.01 32.48 0.00 0.00 15323.11 6844.87 28156.21 00:09:58.491 =================================================================================================================== 00:09:58.491 Total : 8314.01 32.48 0.00 0.00 15323.11 6844.87 28156.21 00:09:58.491 197808.00 IOPS, 772.69 MiB/s 00:09:58.491 Latency(us) 00:09:58.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.491 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:58.491 Nvme1n1 : 1.00 197440.67 771.25 0.00 0.00 644.87 300.37 1856.85 00:09:58.491 
=================================================================================================================== 00:09:58.491 Total : 197440.67 771.25 0.00 0.00 644.87 300.37 1856.85 00:09:58.749 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 137866 00:09:58.749 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 137868 00:09:58.749 8216.00 IOPS, 32.09 MiB/s 00:09:58.749 Latency(us) 00:09:58.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.749 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:58.749 Nvme1n1 : 1.01 8315.59 32.48 0.00 0.00 15349.39 3519.53 44079.03 00:09:58.749 =================================================================================================================== 00:09:58.749 Total : 8315.59 32.48 0.00 0.00 15349.39 3519.53 44079.03 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 137871 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:59.008 
09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:59.008 rmmod nvme_tcp
00:09:59.008 rmmod nvme_fabrics
00:09:59.008 rmmod nvme_keyring
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 137773 ']'
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 137773
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 137773 ']'
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 137773
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137773
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 137773'
00:09:59.008 killing process with pid 137773
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 137773
00:09:59.008 09:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 137773
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:59.268 09:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:01.821
00:10:01.821 real 0m7.451s
00:10:01.821 user 0m17.369s
00:10:01.821 sys 0m3.676s
00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:01.821 ************************************
00:10:01.821 END TEST nvmf_bdev_io_wait
00:10:01.821 ************************************
00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:01.821 ************************************
00:10:01.821 START TEST nvmf_queue_depth
00:10:01.821 ************************************
00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:10:01.821 * Looking for test storage...
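The teardown traced above runs `modprobe -v -r nvme-tcp` inside `for i in {1..20}` with errexit temporarily disabled (`set +e` … `set -e`), so transient removal failures don't abort the run. A generic retry helper in the same spirit (a minimal sketch; the `retry` name and the delay are illustrative, not taken from nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Retry a command up to N times, tolerating failures in between,
# mirroring the set +e / loop / set -e pattern in the trace above.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0          # command succeeded; stop retrying
    fi
    sleep 0.1           # brief pause before the next attempt
  done
  return 1              # every attempt failed
}
```

Called as, e.g., `retry 20 modprobe -v -r nvme-tcp`; the failure handling stays inside the function, so the surrounding script can keep `set -e` enabled.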
00:10:01.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:01.821 
09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:01.821 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:01.821 --rc genhtml_branch_coverage=1 00:10:01.821 --rc genhtml_function_coverage=1 00:10:01.821 --rc genhtml_legend=1 00:10:01.821 --rc geninfo_all_blocks=1 00:10:01.821 --rc geninfo_unexecuted_blocks=1 00:10:01.821 00:10:01.821 ' 00:10:01.821 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:01.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.821 --rc genhtml_branch_coverage=1 00:10:01.821 --rc genhtml_function_coverage=1 00:10:01.821 --rc genhtml_legend=1 00:10:01.822 --rc geninfo_all_blocks=1 00:10:01.822 --rc geninfo_unexecuted_blocks=1 00:10:01.822 00:10:01.822 ' 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:01.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.822 --rc genhtml_branch_coverage=1 00:10:01.822 --rc genhtml_function_coverage=1 00:10:01.822 --rc genhtml_legend=1 00:10:01.822 --rc geninfo_all_blocks=1 00:10:01.822 --rc geninfo_unexecuted_blocks=1 00:10:01.822 00:10:01.822 ' 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:01.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.822 --rc genhtml_branch_coverage=1 00:10:01.822 --rc genhtml_function_coverage=1 00:10:01.822 --rc genhtml_legend=1 00:10:01.822 --rc geninfo_all_blocks=1 00:10:01.822 --rc geninfo_unexecuted_blocks=1 00:10:01.822 00:10:01.822 ' 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.822 09:30:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.822 09:30:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.822 09:30:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.822 09:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.725 09:30:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:10:03.725 Found 0000:09:00.0 (0x8086 - 0x1592) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:10:03.725 Found 0000:09:00.1 (0x8086 - 0x1592) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x1592 == \0\x\1\0\1\7 ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:03.725 Found net devices under 0000:09:00.0: cvl_0_0 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:03.725 Found net devices under 0000:09:00.1: cvl_0_1 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.725 
09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.725 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:03.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:03.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms
00:10:03.726
00:10:03.726 --- 10.0.0.2 ping statistics ---
00:10:03.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:03.726 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:03.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:03.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:10:03.726
00:10:03.726 --- 10.0.0.1 ping statistics ---
00:10:03.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:03.726 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=139995
00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 139995 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 139995 ']' 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.726 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.726 [2024-10-07 09:30:52.703159] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:10:03.726 [2024-10-07 09:30:52.703235] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.985 [2024-10-07 09:30:52.768860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.985 [2024-10-07 09:30:52.879298] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.985 [2024-10-07 09:30:52.879357] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:03.985 [2024-10-07 09:30:52.879377] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.985 [2024-10-07 09:30:52.879395] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.985 [2024-10-07 09:30:52.879409] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.985 [2024-10-07 09:30:52.879980] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.244 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.244 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:04.244 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:04.244 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.244 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.244 [2024-10-07 09:30:53.029719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.244 Malloc0 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.244 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.245 [2024-10-07 09:30:53.095882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.245 09:30:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=140124 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 140124 /var/tmp/bdevperf.sock 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 140124 ']' 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:04.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.245 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.245 [2024-10-07 09:30:53.141911] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:10:04.245 [2024-10-07 09:30:53.141998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140124 ] 00:10:04.245 [2024-10-07 09:30:53.198360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.504 [2024-10-07 09:30:53.307796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.504 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.504 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:04.504 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:04.504 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.504 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.762 NVMe0n1 00:10:04.762 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.762 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:04.762 Running I/O for 10 seconds... 
00:10:14.997 8192.00 IOPS, 32.00 MiB/s 8529.00 IOPS, 33.32 MiB/s 8522.00 IOPS, 33.29 MiB/s 8554.75 IOPS, 33.42 MiB/s 8590.40 IOPS, 33.56 MiB/s 8616.67 IOPS, 33.66 MiB/s 8619.57 IOPS, 33.67 MiB/s 8654.88 IOPS, 33.81 MiB/s 8639.11 IOPS, 33.75 MiB/s 8681.60 IOPS, 33.91 MiB/s 00:10:14.997 Latency(us) 00:10:14.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.997 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:14.997 Verification LBA range: start 0x0 length 0x4000 00:10:14.997 NVMe0n1 : 10.10 8699.00 33.98 0.00 0.00 117239.32 21748.24 69905.07 00:10:14.997 =================================================================================================================== 00:10:14.997 Total : 8699.00 33.98 0.00 0.00 117239.32 21748.24 69905.07 00:10:14.997 { 00:10:14.997 "results": [ 00:10:14.997 { 00:10:14.997 "job": "NVMe0n1", 00:10:14.997 "core_mask": "0x1", 00:10:14.997 "workload": "verify", 00:10:14.997 "status": "finished", 00:10:14.997 "verify_range": { 00:10:14.997 "start": 0, 00:10:14.997 "length": 16384 00:10:14.997 }, 00:10:14.997 "queue_depth": 1024, 00:10:14.997 "io_size": 4096, 00:10:14.997 "runtime": 10.097711, 00:10:14.997 "iops": 8699.001189477496, 00:10:14.997 "mibps": 33.98047339639647, 00:10:14.997 "io_failed": 0, 00:10:14.997 "io_timeout": 0, 00:10:14.997 "avg_latency_us": 117239.32359819202, 00:10:14.997 "min_latency_us": 21748.242962962962, 00:10:14.997 "max_latency_us": 69905.06666666667 00:10:14.997 } 00:10:14.997 ], 00:10:14.997 "core_count": 1 00:10:14.997 } 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 140124 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 140124 ']' 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 140124 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@955 -- # uname 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140124 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140124' 00:10:14.997 killing process with pid 140124 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 140124 00:10:14.997 Received shutdown signal, test time was about 10.000000 seconds 00:10:14.997 00:10:14.997 Latency(us) 00:10:14.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.997 =================================================================================================================== 00:10:14.997 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:14.997 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 140124 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set 
+e 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.256 rmmod nvme_tcp 00:10:15.256 rmmod nvme_fabrics 00:10:15.256 rmmod nvme_keyring 00:10:15.256 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 139995 ']' 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 139995 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 139995 ']' 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 139995 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139995 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139995' 00:10:15.515 killing process with pid 139995 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@969 -- # kill 139995 00:10:15.515 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 139995 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.774 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.684 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.684 00:10:17.684 real 0m16.333s 00:10:17.684 user 0m22.954s 00:10:17.684 sys 0m3.086s 00:10:17.684 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.684 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 
00:10:17.684 ************************************ 00:10:17.684 END TEST nvmf_queue_depth 00:10:17.684 ************************************ 00:10:17.684 09:31:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:17.684 09:31:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:17.684 09:31:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.684 09:31:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.943 ************************************ 00:10:17.944 START TEST nvmf_target_multipath 00:10:17.944 ************************************ 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:17.944 * Looking for test storage... 
00:10:17.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:17.944 09:31:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:17.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.944 --rc genhtml_branch_coverage=1 00:10:17.944 --rc genhtml_function_coverage=1 00:10:17.944 --rc genhtml_legend=1 00:10:17.944 --rc geninfo_all_blocks=1 00:10:17.944 --rc geninfo_unexecuted_blocks=1 00:10:17.944 00:10:17.944 ' 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:17.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.944 --rc genhtml_branch_coverage=1 00:10:17.944 --rc genhtml_function_coverage=1 00:10:17.944 --rc genhtml_legend=1 00:10:17.944 --rc geninfo_all_blocks=1 00:10:17.944 --rc geninfo_unexecuted_blocks=1 00:10:17.944 00:10:17.944 ' 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:17.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.944 --rc genhtml_branch_coverage=1 00:10:17.944 --rc genhtml_function_coverage=1 00:10:17.944 --rc genhtml_legend=1 00:10:17.944 --rc geninfo_all_blocks=1 00:10:17.944 --rc geninfo_unexecuted_blocks=1 00:10:17.944 00:10:17.944 ' 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:17.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.944 --rc genhtml_branch_coverage=1 00:10:17.944 --rc genhtml_function_coverage=1 00:10:17.944 --rc genhtml_legend=1 00:10:17.944 --rc geninfo_all_blocks=1 00:10:17.944 --rc geninfo_unexecuted_blocks=1 00:10:17.944 00:10:17.944 ' 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.944 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.945 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.481 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:10:20.482 Found 0000:09:00.0 (0x8086 - 0x1592) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:10:20.482 Found 0000:09:00.1 (0x8086 - 0x1592) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:20.482 Found net devices under 0000:09:00.0: cvl_0_0 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:20.482 09:31:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:20.482 Found net devices under 0000:09:00.1: cvl_0_1 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.482 09:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:10:20.482 00:10:20.482 --- 10.0.0.2 ping statistics --- 00:10:20.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.482 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:10:20.482 00:10:20.482 --- 10.0.0.1 ping statistics --- 00:10:20.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.482 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:20.482 only one NIC for nvmf test 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:20.482 09:31:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.482 rmmod nvme_tcp 00:10:20.482 rmmod nvme_fabrics 00:10:20.482 rmmod nvme_keyring 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:20.482 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:20.483 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:20.483 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:20.483 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:20.483 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.483 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.483 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.483 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.483 09:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.396 00:10:22.396 real 0m4.550s 00:10:22.396 user 0m0.917s 00:10:22.396 sys 0m1.652s 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.396 ************************************ 00:10:22.396 END TEST nvmf_target_multipath 00:10:22.396 ************************************ 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.396 ************************************ 00:10:22.396 START TEST nvmf_zcopy 00:10:22.396 ************************************ 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:22.396 * Looking for test storage... 00:10:22.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.396 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:22.655 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:22.655 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.655 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.655 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.655 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.655 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.655 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.656 09:31:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:22.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.656 --rc genhtml_branch_coverage=1 00:10:22.656 --rc genhtml_function_coverage=1 00:10:22.656 --rc genhtml_legend=1 00:10:22.656 --rc geninfo_all_blocks=1 00:10:22.656 --rc geninfo_unexecuted_blocks=1 00:10:22.656 00:10:22.656 ' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:22.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.656 --rc genhtml_branch_coverage=1 00:10:22.656 --rc genhtml_function_coverage=1 00:10:22.656 --rc genhtml_legend=1 00:10:22.656 --rc geninfo_all_blocks=1 00:10:22.656 --rc geninfo_unexecuted_blocks=1 00:10:22.656 00:10:22.656 ' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:22.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.656 --rc genhtml_branch_coverage=1 00:10:22.656 --rc genhtml_function_coverage=1 00:10:22.656 --rc genhtml_legend=1 00:10:22.656 --rc geninfo_all_blocks=1 00:10:22.656 --rc geninfo_unexecuted_blocks=1 00:10:22.656 00:10:22.656 ' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:22.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.656 --rc genhtml_branch_coverage=1 00:10:22.656 --rc 
genhtml_function_coverage=1 00:10:22.656 --rc genhtml_legend=1 00:10:22.656 --rc geninfo_all_blocks=1 00:10:22.656 --rc geninfo_unexecuted_blocks=1 00:10:22.656 00:10:22.656 ' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.656 09:31:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:22.656 09:31:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.656 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:24.562 09:31:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:10:24.562 Found 0000:09:00.0 (0x8086 - 0x1592) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:10:24.562 Found 0000:09:00.1 (0x8086 - 0x1592) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:24.562 Found net devices under 0000:09:00.0: cvl_0_0 00:10:24.562 09:31:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:24.562 Found net devices under 0000:09:00.1: cvl_0_1 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.562 09:31:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:24.562 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:24.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:10:24.821 00:10:24.821 --- 10.0.0.2 ping statistics --- 00:10:24.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.821 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:24.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:10:24.821 00:10:24.821 --- 10.0.0.1 ping statistics --- 00:10:24.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.821 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=145088 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 145088 00:10:24.821 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # 
'[' -z 145088 ']' 00:10:24.822 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:24.822 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.822 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.822 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.822 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.822 09:31:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.822 [2024-10-07 09:31:13.752699] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:10:24.822 [2024-10-07 09:31:13.752789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.822 [2024-10-07 09:31:13.814534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.081 [2024-10-07 09:31:13.925952] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.081 [2024-10-07 09:31:13.926030] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
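The trace above (nvmf_tcp_init in test/nvmf/common.sh, followed by launching nvmf_tgt inside the target namespace) can be summarized as a dry-run sketch. This is not the helper itself: the interface names (cvl_0_0/cvl_0_1), addresses, port and core mask are the values from this particular run and would differ on other hosts, and the commands are echoed rather than executed because the real ones need root and physical NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based TCP topology this log sets up.
# run() records and echoes each command; swap it for "sudo" on a real node.
CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                       # isolate the target side
run ip link set cvl_0_0 netns "$NS"          # move the target NIC into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP, host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator check
# nvmf_tgt then runs inside the namespace (-m 0x2: core 1 only)
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
```

The two pings mirror the connectivity check the log performs before the target app is started and waitforlisten polls for /var/tmp/spdk.sock.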
00:10:25.081 [2024-10-07 09:31:13.926044] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.081 [2024-10-07 09:31:13.926055] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.081 [2024-10-07 09:31:13.926064] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.081 [2024-10-07 09:31:13.926594] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.081 [2024-10-07 09:31:14.061975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.081 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.341 [2024-10-07 09:31:14.078238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.341 malloc0 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:25.341 { 00:10:25.341 "params": { 00:10:25.341 "name": "Nvme$subsystem", 00:10:25.341 "trtype": "$TEST_TRANSPORT", 00:10:25.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:25.341 "adrfam": "ipv4", 00:10:25.341 "trsvcid": "$NVMF_PORT", 00:10:25.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:25.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:25.341 "hdgst": ${hdgst:-false}, 00:10:25.341 "ddgst": ${ddgst:-false} 00:10:25.341 }, 00:10:25.341 "method": "bdev_nvme_attach_controller" 00:10:25.341 } 00:10:25.341 EOF 00:10:25.341 )") 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:25.341 09:31:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:25.341 "params": { 00:10:25.341 "name": "Nvme1", 00:10:25.341 "trtype": "tcp", 00:10:25.341 "traddr": "10.0.0.2", 00:10:25.341 "adrfam": "ipv4", 00:10:25.341 "trsvcid": "4420", 00:10:25.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:25.341 "hdgst": false, 00:10:25.341 "ddgst": false 00:10:25.341 }, 00:10:25.341 "method": "bdev_nvme_attach_controller" 00:10:25.341 }' 00:10:25.341 [2024-10-07 09:31:14.176846] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:10:25.341 [2024-10-07 09:31:14.176928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145113 ] 00:10:25.341 [2024-10-07 09:31:14.238890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.600 [2024-10-07 09:31:14.349989] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.858 Running I/O for 10 seconds... 
00:10:35.712 5748.00 IOPS, 44.91 MiB/s 5773.50 IOPS, 45.11 MiB/s 5801.67 IOPS, 45.33 MiB/s 5803.00 IOPS, 45.34 MiB/s 5803.40 IOPS, 45.34 MiB/s 5809.50 IOPS, 45.39 MiB/s 5812.29 IOPS, 45.41 MiB/s 5818.00 IOPS, 45.45 MiB/s 5815.56 IOPS, 45.43 MiB/s 5814.80 IOPS, 45.43 MiB/s 00:10:35.712 Latency(us) 00:10:35.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.712 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:35.712 Verification LBA range: start 0x0 length 0x1000 00:10:35.712 Nvme1n1 : 10.01 5820.12 45.47 0.00 0.00 21934.85 424.77 29515.47 00:10:35.712 =================================================================================================================== 00:10:35.712 Total : 5820.12 45.47 0.00 0.00 21934.85 424.77 29515.47 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=146373 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:35.970 { 00:10:35.970 "params": { 00:10:35.970 "name": "Nvme$subsystem", 00:10:35.970 "trtype": "$TEST_TRANSPORT", 00:10:35.970 
"traddr": "$NVMF_FIRST_TARGET_IP",
00:10:35.970 "adrfam": "ipv4",
00:10:35.970 "trsvcid": "$NVMF_PORT",
00:10:35.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:35.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:35.970 "hdgst": ${hdgst:-false},
00:10:35.970 "ddgst": ${ddgst:-false}
00:10:35.970 },
00:10:35.970 "method": "bdev_nvme_attach_controller"
00:10:35.970 }
00:10:35.970 EOF
00:10:35.970 )")
00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:10:35.970 [2024-10-07 09:31:24.947500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:35.970 [2024-10-07 09:31:24.947541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:10:35.970 09:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:10:35.970 "params": {
00:10:35.970 "name": "Nvme1",
00:10:35.970 "trtype": "tcp",
00:10:35.970 "traddr": "10.0.0.2",
00:10:35.970 "adrfam": "ipv4",
00:10:35.970 "trsvcid": "4420",
00:10:35.970 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:35.970 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:35.970 "hdgst": false,
00:10:35.970 "ddgst": false
00:10:35.970 },
00:10:35.970 "method": "bdev_nvme_attach_controller"
00:10:35.970 }'
00:10:35.970 [2024-10-07 09:31:24.955465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:35.970 [2024-10-07 09:31:24.955489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:35.970 [2024-10-07 09:31:24.963492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:35.970 [2024-10-07 09:31:24.963516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:24.971505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:24.971526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:24.979530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:24.979551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:24.985167] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:10:36.227 [2024-10-07 09:31:24.985224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146373 ]
00:10:36.227 [2024-10-07 09:31:24.987551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:24.987572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:24.995573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:24.995594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.003594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.003615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.011617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.011638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.019638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.019681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.027687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.027710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.035708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.035731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.043193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:36.227 [2024-10-07 09:31:25.043733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.043755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.051773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.051811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.059794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.059831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.067787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.067809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.075807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.075829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.083832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.083854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.091852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.091874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.227 [2024-10-07 09:31:25.099872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.227 [2024-10-07 09:31:25.099894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.107926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.107979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.115927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.115968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.123937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.123974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.131973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.131995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.139992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.140013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.148029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.148050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.156029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.156049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.156568] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:10:36.228 [2024-10-07 09:31:25.164068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.164089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.172105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.172135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.180123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.180158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.188153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.188192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.196174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.196212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.204196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.204237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.212216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.212256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.228 [2024-10-07 09:31:25.220230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.228 [2024-10-07 09:31:25.220267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.484 [2024-10-07 09:31:25.228235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.228257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.236279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.236315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.244303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.244339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.252302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.252325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.260322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.260342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.268340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.268359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.276370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.276402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.284398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.284424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.292413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.292436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.300438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.300461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.308473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.308500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.316487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.316510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.324507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.324528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.332557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.332583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.340553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.340574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 Running I/O for 5 seconds...
00:10:36.485 [2024-10-07 09:31:25.348575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.348596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.363283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.363313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.374054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.374081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.385280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.385307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.396316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.396344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.407691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.407719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.418822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.418850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.429921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.429950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.441023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.441051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.451723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.451751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.462741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.462768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.485 [2024-10-07 09:31:25.474097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.485 [2024-10-07 09:31:25.474125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.485227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.485255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.496362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.496389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.507365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.507392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.518352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.518379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.531056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.531083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.541424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.541452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.552124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.552152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.563124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.563151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.574224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.574251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.585356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.585384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.596343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.596371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.608000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.608027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.618724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.618753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.629611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.629639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.643003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.643030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.653780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.653809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.664775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.664803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.675592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.675620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.686737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.686765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.700022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.700053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.710873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.710901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.721775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.721803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:36.743 [2024-10-07 09:31:25.733077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:36.743 [2024-10-07 09:31:25.733104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.001 [2024-10-07 09:31:25.744300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.001 [2024-10-07 09:31:25.744327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.001 [2024-10-07 09:31:25.755430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.001 [2024-10-07 09:31:25.755456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.001 [2024-10-07 09:31:25.766513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.001 [2024-10-07 09:31:25.766540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.001 [2024-10-07 09:31:25.777778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.777806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.788573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.788599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.802403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.802429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.813440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.813466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.824379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.824405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.835319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.835345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.846169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.846195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.856883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.856910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.868027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.868055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.879079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.879106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.893073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.893099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.903495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.903521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.913883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.913910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.925276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.925302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.938192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.938218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.948410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.948436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.958972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.958999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.969686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.969713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.980277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.980302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.002 [2024-10-07 09:31:25.991484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.002 [2024-10-07 09:31:25.991510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.004970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.004997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.015349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.015375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.026436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.026462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.037662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.037713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.048721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.048748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.062150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.062176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.072848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.072876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.084129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.084155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.097463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.097490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.108446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.108472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.119455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.119480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.130825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.130852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.141712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.141754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.154230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.154258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.165056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.165084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.176059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.176085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.189157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.189184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.199461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.199486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.209724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.209751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.220398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.220425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.232850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.232878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.242378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.242404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.261 [2024-10-07 09:31:26.254376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.261 [2024-10-07 09:31:26.254402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.265490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.265515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.275745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.275771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.286600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.286629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.299340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.299366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.309074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.309109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.320612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.320638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.331398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.331424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.342008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.342052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 11478.00 IOPS, 89.67 MiB/s [2024-10-07 09:31:26.353519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.353544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.364159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.364185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.375189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.375214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.386113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.386139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.397031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.397057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.408153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.408179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.420387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.420413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.430606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.430632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.441875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.441918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.452428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.452454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.463188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.463214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.475836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.475863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.486007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.486033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.497132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.497157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.510171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.510197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.525 [2024-10-07 09:31:26.520547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.525 [2024-10-07 09:31:26.520581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.784 [2024-10-07 09:31:26.531391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:37.784 [2024-10-07 09:31:26.531417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:37.784 [2024-10-07 09:31:26.542527]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.542553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.553428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.553454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.564794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.564821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.575720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.575747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.587107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.587133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.598726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.598753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.610027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.610057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.622995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.623021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.633567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.633594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.644315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.644342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.655313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.655340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.666244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.666271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.676853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.676881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.687812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.687843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.699142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.699169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.710423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.710449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.721036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 
[2024-10-07 09:31:26.721062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.732217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.732253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.743275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.743300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.753922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.753964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.764995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.765036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.784 [2024-10-07 09:31:26.776496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.784 [2024-10-07 09:31:26.776524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.787426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.787454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.798959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.799001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.809662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.809716] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.820319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.820346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.831102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.831128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.841847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.841874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.852784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.852811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.865358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.865385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.875945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.875973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.887362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.887390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.898342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.898369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:38.043 [2024-10-07 09:31:26.909752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.909780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.921254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.921280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.934311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.934336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.946274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.946304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.955400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.955425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.967748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.967791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.978553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.978580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:26.989637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:26.989663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:27.000691] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.043 [2024-10-07 09:31:27.000718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.043 [2024-10-07 09:31:27.011823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.044 [2024-10-07 09:31:27.011853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.044 [2024-10-07 09:31:27.022945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.044 [2024-10-07 09:31:27.022987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.044 [2024-10-07 09:31:27.034112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.044 [2024-10-07 09:31:27.034138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.302 [2024-10-07 09:31:27.047597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.302 [2024-10-07 09:31:27.047623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.302 [2024-10-07 09:31:27.058772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.302 [2024-10-07 09:31:27.058799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.302 [2024-10-07 09:31:27.069801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.302 [2024-10-07 09:31:27.069828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.302 [2024-10-07 09:31:27.080986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.302 [2024-10-07 09:31:27.081013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.091704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.091731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.102702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.102729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.113557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.113584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.124584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.124613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.135286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.135313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.146120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.146147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.157237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.157264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.170298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.170325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.180469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 
[2024-10-07 09:31:27.180495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.191866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.191894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.203181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.203207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.214079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.214105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.225553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.225580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.238726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.238754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.249625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.249676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.260571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.260598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.273589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.273616] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.283947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.283990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.303 [2024-10-07 09:31:27.294967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.303 [2024-10-07 09:31:27.294995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.308812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.308855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.319475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.319501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.330305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.330332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.343092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.343119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 11511.00 IOPS, 89.93 MiB/s [2024-10-07 09:31:27.353551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.353578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.364567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.364593] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.377401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.377427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.387912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.387938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.398906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.398933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.409776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.409803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.420460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.420486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.433141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.433166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.444943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.444970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.454905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.454932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:38.562 [2024-10-07 09:31:27.466587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.466613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.477136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.477165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.488240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.488266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.499019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.499045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.510097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.510122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.523020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.523046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.533571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.533597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.544468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.544494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.562 [2024-10-07 09:31:27.557222] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.562 [2024-10-07 09:31:27.557250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.567802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.567830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.578578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.578618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.589284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.589310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.600480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.600506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.611541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.611568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.622350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.622377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.635615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.635641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.646343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.646370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.657016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.657042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.669804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.669832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.680353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.680386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.691397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.691424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.704440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.704466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.715465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.715492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.726793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.726821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.738226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 
[2024-10-07 09:31:27.738253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.749759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.749787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.760713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.760747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.773619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.773645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.783758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.783787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.821 [2024-10-07 09:31:27.794629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.821 [2024-10-07 09:31:27.794691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-10-07 09:31:27.805523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-10-07 09:31:27.805549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.818456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.818503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.828855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.828886] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.839854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.839883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.852486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.852512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.862787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.862816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.873740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.873768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.884557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.884583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.895674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.895701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.908561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.908587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.081 [2024-10-07 09:31:27.919176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.081 [2024-10-07 09:31:27.919203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:39.081 [2024-10-07 09:31:27.929997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.081 [2024-10-07 09:31:27.930039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 11 ms intervals from 09:31:27.942655 through 09:31:29.758140; identical entries omitted ...]
00:10:39.601 11522.67 IOPS, 90.02 MiB/s
00:10:40.381 11534.00 IOPS, 90.11 MiB/s
00:10:40.900 [2024-10-07 09:31:29.769807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 
[2024-10-07 09:31:29.769836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.780854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.780883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.793694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.793721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.804196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.804223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.815167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.815194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.826255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.826283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.837071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.837098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.849542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.849570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.860065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.860091] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.871598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.871623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.884943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.884984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.900 [2024-10-07 09:31:29.896091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.900 [2024-10-07 09:31:29.896134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:29.906776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.906803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:29.917449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.917476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:29.930796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.930823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:29.941121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.941147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:29.951867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.951893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:41.159 [2024-10-07 09:31:29.962504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.962530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:29.973601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.973627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:29.986607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.986633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:29.997557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:29.997585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.008580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.008614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.019895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.019924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.030745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.030773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.042272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.042298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.053811] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.053838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.065384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.065413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.078248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.078274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.088155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.088182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.099579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.099607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.110418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.110446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.121396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.121424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.133589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.133616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.144007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.144034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.159 [2024-10-07 09:31:30.154558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.159 [2024-10-07 09:31:30.154586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.165700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.165744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.176583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.176610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.188131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.188173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.199387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.199414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.210918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.210952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.221993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.222035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.235115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 
[2024-10-07 09:31:30.235143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.246059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.246088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.257252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.257278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.267971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.267999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.279133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.279159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.290829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.290857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.302461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.302489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.317644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.317696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.328244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.328270] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.339612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.339639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.352602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.352639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 11532.60 IOPS, 90.10 MiB/s [2024-10-07 09:31:30.363140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.363166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.370129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.370156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 00:10:41.418 Latency(us) 00:10:41.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.418 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:41.418 Nvme1n1 : 5.01 11535.06 90.12 0.00 0.00 11081.86 4878.79 23107.51 00:10:41.418 =================================================================================================================== 00:10:41.418 Total : 11535.06 90.12 0.00 0.00 11081.86 4878.79 23107.51 00:10:41.418 [2024-10-07 09:31:30.377776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.377803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.385788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.385813] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.393800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.393826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.401866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.401918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.418 [2024-10-07 09:31:30.409889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.418 [2024-10-07 09:31:30.409939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.417921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.417970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.425925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.425975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.433946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.433995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.441974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.442024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.449990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.450038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:41.679 [2024-10-07 09:31:30.458008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.458057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.466078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.466131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.474062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.474114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.482084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.482156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.490110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.490160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.498132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.498182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.506147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.506195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.514157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.514217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.522158] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.522181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.530169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.530190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.538189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.538208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.546218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.546239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.554292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.554338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.562311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.562357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.570335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.570378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.578299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.578318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.586321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.586340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.594343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.594363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.602376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.602399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.610444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.610493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.618468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.618518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.626431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.626451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.634451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.634481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 [2024-10-07 09:31:30.642471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.679 [2024-10-07 09:31:30.642490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (146373) - No such process 00:10:41.679 
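Two things in the span above can be checked mechanically: the NSID-collision semantics behind the error flood (a subsystem NSID can hold only one namespace at a time, which is why every retry fails until the slot is freed), and the arithmetic of the Nvme1n1 summary. A minimal sketch — the `Subsystem` class here is a toy illustration, not SPDK's actual implementation, and the numbers are copied from the log:

```python
# --- 1. Why the add kept failing: one bdev per NSID at a time. ---
class Subsystem:
    """Toy model of per-subsystem NSID bookkeeping (illustrative only)."""
    def __init__(self):
        self.namespaces = {}          # nsid -> bdev name

    def add_ns(self, bdev, nsid):
        if nsid in self.namespaces:   # mirrors "Requested NSID 1 already in use"
            raise ValueError(f"Requested NSID {nsid} already in use")
        self.namespaces[nsid] = bdev

    def remove_ns(self, nsid):
        self.namespaces.pop(nsid, None)

subsys = Subsystem()
subsys.add_ns("malloc0", 1)
try:
    subsys.add_ns("delay0", 1)        # rejected while NSID 1 is occupied
except ValueError as err:
    print(err)
subsys.remove_ns(1)                   # free the slot first...
subsys.add_ns("delay0", 1)            # ...then the same NSID is reusable

# --- 2. The perf summary above is internally consistent. ---
iops, io_size, queue_depth = 11535.06, 8192, 128   # from the job line
mib_s = iops * io_size / (1024 * 1024)             # throughput in MiB/s
avg_latency_us = queue_depth / iops * 1e6          # Little's law estimate
print(round(mib_s, 2))        # close to the reported 90.12 MiB/s
print(round(avg_latency_us))  # within ~15 us of the reported 11081.86 average
```

This also matches the trace that follows, where the test removes NSID 1 from cnode1 and successfully re-adds it backed by the delay0 bdev.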
09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 146373 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.679 delay0 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.679 09:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:41.939 [2024-10-07 09:31:30.761509] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:10:50.056 Initializing NVMe Controllers 00:10:50.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:50.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:50.056 Initialization complete. Launching workers. 00:10:50.056 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 221, failed: 26550 00:10:50.056 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26611, failed to submit 160 00:10:50.056 success 26552, unsuccessful 59, failed 0 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.056 rmmod nvme_tcp 00:10:50.056 rmmod nvme_fabrics 00:10:50.056 rmmod nvme_keyring 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 145088 ']' 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 
145088 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 145088 ']' 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 145088 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:50.056 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145088 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145088' 00:10:50.056 killing process with pid 145088 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 145088 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 145088 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.056 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.442 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:51.442 00:10:51.442 real 0m29.070s 00:10:51.442 user 0m41.394s 00:10:51.442 sys 0m9.106s 00:10:51.442 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.442 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:51.442 ************************************ 00:10:51.442 END TEST nvmf_zcopy 00:10:51.442 ************************************ 00:10:51.442 09:31:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:51.442 09:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:51.442 09:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.442 09:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.442 ************************************ 00:10:51.442 START TEST nvmf_nmic 00:10:51.442 ************************************ 00:10:51.442 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:51.701 * Looking for test storage... 
00:10:51.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.701 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:51.701 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.702 09:31:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:51.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.702 --rc genhtml_branch_coverage=1 00:10:51.702 --rc genhtml_function_coverage=1 00:10:51.702 --rc genhtml_legend=1 00:10:51.702 --rc geninfo_all_blocks=1 00:10:51.702 --rc geninfo_unexecuted_blocks=1 
00:10:51.702 00:10:51.702 ' 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:51.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.702 --rc genhtml_branch_coverage=1 00:10:51.702 --rc genhtml_function_coverage=1 00:10:51.702 --rc genhtml_legend=1 00:10:51.702 --rc geninfo_all_blocks=1 00:10:51.702 --rc geninfo_unexecuted_blocks=1 00:10:51.702 00:10:51.702 ' 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:51.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.702 --rc genhtml_branch_coverage=1 00:10:51.702 --rc genhtml_function_coverage=1 00:10:51.702 --rc genhtml_legend=1 00:10:51.702 --rc geninfo_all_blocks=1 00:10:51.702 --rc geninfo_unexecuted_blocks=1 00:10:51.702 00:10:51.702 ' 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:51.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.702 --rc genhtml_branch_coverage=1 00:10:51.702 --rc genhtml_function_coverage=1 00:10:51.702 --rc genhtml_legend=1 00:10:51.702 --rc geninfo_all_blocks=1 00:10:51.702 --rc geninfo_unexecuted_blocks=1 00:10:51.702 00:10:51.702 ' 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.702 09:31:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.702 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:51.703 
09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:51.703 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.618 09:31:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.618 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:10:53.619 Found 0000:09:00.0 (0x8086 - 0x1592) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:10:53.619 Found 0000:09:00.1 (0x8086 - 0x1592) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:53.619 Found net devices under 0000:09:00.0: cvl_0_0 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:53.619 Found net devices under 0000:09:00.1: cvl_0_1 00:10:53.619 
09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.619 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.878 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.878 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.878 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.878 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.878 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.878 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.878 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.878 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:10:53.879 00:10:53.879 --- 10.0.0.2 ping statistics --- 00:10:53.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.879 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:10:53.879 00:10:53.879 --- 10.0.0.1 ping statistics --- 00:10:53.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.879 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=149729 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 149729 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 149729 ']' 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.879 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.879 [2024-10-07 09:31:42.804024] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:10:53.879 [2024-10-07 09:31:42.804124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.879 [2024-10-07 09:31:42.865641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.138 [2024-10-07 09:31:42.971874] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.138 [2024-10-07 09:31:42.971935] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:54.138 [2024-10-07 09:31:42.971964] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.138 [2024-10-07 09:31:42.971975] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.138 [2024-10-07 09:31:42.971984] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.138 [2024-10-07 09:31:42.973519] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.138 [2024-10-07 09:31:42.973703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.138 [2024-10-07 09:31:42.973732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.138 [2024-10-07 09:31:42.973735] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.138 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.138 [2024-10-07 09:31:43.134207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.397 
09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.397 Malloc0 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.397 [2024-10-07 09:31:43.187305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:54.397 test case1: single bdev can't be used in multiple subsystems 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.397 [2024-10-07 09:31:43.211124] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:54.397 [2024-10-07 
09:31:43.211153] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:54.397 [2024-10-07 09:31:43.211167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.397 request: 00:10:54.397 { 00:10:54.397 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:54.397 "namespace": { 00:10:54.397 "bdev_name": "Malloc0", 00:10:54.397 "no_auto_visible": false 00:10:54.397 }, 00:10:54.397 "method": "nvmf_subsystem_add_ns", 00:10:54.397 "req_id": 1 00:10:54.397 } 00:10:54.397 Got JSON-RPC error response 00:10:54.397 response: 00:10:54.397 { 00:10:54.397 "code": -32602, 00:10:54.397 "message": "Invalid parameters" 00:10:54.397 } 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:54.397 Adding namespace failed - expected result. 
00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:54.397 test case2: host connect to nvmf target in multiple paths 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.397 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.397 [2024-10-07 09:31:43.219219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:54.398 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.398 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.964 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:55.530 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.530 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.530 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.530 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:55.530 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:10:57.429 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:57.429 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:57.429 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.687 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:57.687 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.687 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:57.687 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:57.687 [global] 00:10:57.687 thread=1 00:10:57.687 invalidate=1 00:10:57.687 rw=write 00:10:57.687 time_based=1 00:10:57.687 runtime=1 00:10:57.687 ioengine=libaio 00:10:57.687 direct=1 00:10:57.687 bs=4096 00:10:57.687 iodepth=1 00:10:57.687 norandommap=0 00:10:57.687 numjobs=1 00:10:57.687 00:10:57.687 verify_dump=1 00:10:57.687 verify_backlog=512 00:10:57.687 verify_state_save=0 00:10:57.687 do_verify=1 00:10:57.687 verify=crc32c-intel 00:10:57.687 [job0] 00:10:57.687 filename=/dev/nvme0n1 00:10:57.687 Could not set queue depth (nvme0n1) 00:10:57.945 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.945 fio-3.35 00:10:57.945 Starting 1 thread 00:10:59.318 00:10:59.318 job0: (groupid=0, jobs=1): err= 0: pid=150229: Mon Oct 7 09:31:48 2024 00:10:59.318 read: IOPS=1934, BW=7736KiB/s (7922kB/s)(7744KiB/1001msec) 00:10:59.318 slat (nsec): min=5159, max=37575, avg=12466.40, stdev=5360.04 00:10:59.318 clat (usec): min=163, max=42000, avg=299.64, stdev=1635.95 00:10:59.318 lat (usec): min=169, max=42017, 
avg=312.10, stdev=1636.09 00:10:59.318 clat percentiles (usec): 00:10:59.318 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 210], 00:10:59.318 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:59.318 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 293], 00:10:59.318 | 99.00th=[ 420], 99.50th=[ 553], 99.90th=[42206], 99.95th=[42206], 00:10:59.318 | 99.99th=[42206] 00:10:59.318 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:59.318 slat (nsec): min=6455, max=50132, avg=15783.28, stdev=6600.67 00:10:59.318 clat (usec): min=121, max=260, avg=169.34, stdev=19.12 00:10:59.318 lat (usec): min=128, max=276, avg=185.13, stdev=23.02 00:10:59.318 clat percentiles (usec): 00:10:59.318 | 1.00th=[ 128], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 153], 00:10:59.318 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:59.318 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 202], 00:10:59.318 | 99.00th=[ 219], 99.50th=[ 223], 99.90th=[ 237], 99.95th=[ 241], 00:10:59.318 | 99.99th=[ 260] 00:10:59.318 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:59.318 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:59.318 lat (usec) : 250=89.68%, 500=9.94%, 750=0.30% 00:10:59.318 lat (msec) : 50=0.08% 00:10:59.318 cpu : usr=4.80%, sys=7.30%, ctx=3984, majf=0, minf=1 00:10:59.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.318 issued rwts: total=1936,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.318 00:10:59.318 Run status group 0 (all jobs): 00:10:59.318 READ: bw=7736KiB/s (7922kB/s), 7736KiB/s-7736KiB/s (7922kB/s-7922kB/s), io=7744KiB (7930kB), 
run=1001-1001msec 00:10:59.319 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:59.319 00:10:59.319 Disk stats (read/write): 00:10:59.319 nvme0n1: ios=1651/2048, merge=0/0, ticks=487/317, in_queue=804, util=91.48% 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.319 rmmod nvme_tcp 00:10:59.319 rmmod nvme_fabrics 00:10:59.319 rmmod nvme_keyring 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 149729 ']' 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 149729 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 149729 ']' 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 149729 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 149729 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 149729' 00:10:59.319 killing process with pid 149729 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 149729 00:10:59.319 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 149729 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.577 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.119 00:11:02.119 real 0m10.206s 00:11:02.119 user 0m22.970s 00:11:02.119 sys 0m2.814s 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:02.119 ************************************ 00:11:02.119 END TEST nvmf_nmic 00:11:02.119 ************************************ 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.119 ************************************ 00:11:02.119 START TEST nvmf_fio_target 00:11:02.119 ************************************ 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:02.119 * Looking for test storage... 00:11:02.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:02.119 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.120 09:31:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.120 --rc genhtml_branch_coverage=1 00:11:02.120 --rc genhtml_function_coverage=1 00:11:02.120 --rc genhtml_legend=1 00:11:02.120 --rc geninfo_all_blocks=1 00:11:02.120 --rc geninfo_unexecuted_blocks=1 00:11:02.120 00:11:02.120 ' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.120 --rc genhtml_branch_coverage=1 00:11:02.120 --rc genhtml_function_coverage=1 00:11:02.120 --rc genhtml_legend=1 00:11:02.120 --rc geninfo_all_blocks=1 00:11:02.120 --rc geninfo_unexecuted_blocks=1 00:11:02.120 00:11:02.120 ' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.120 --rc genhtml_branch_coverage=1 00:11:02.120 --rc genhtml_function_coverage=1 00:11:02.120 --rc genhtml_legend=1 00:11:02.120 --rc geninfo_all_blocks=1 00:11:02.120 --rc geninfo_unexecuted_blocks=1 00:11:02.120 00:11:02.120 ' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.120 --rc 
genhtml_branch_coverage=1 00:11:02.120 --rc genhtml_function_coverage=1 00:11:02.120 --rc genhtml_legend=1 00:11:02.120 --rc geninfo_all_blocks=1 00:11:02.120 --rc geninfo_unexecuted_blocks=1 00:11:02.120 00:11:02.120 ' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:02.120 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.121 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.121 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.121 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:02.121 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:02.121 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.121 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.027 09:31:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:11:04.027 Found 0000:09:00.0 (0x8086 - 0x1592) 00:11:04.027 09:31:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:11:04.027 Found 0000:09:00.1 (0x8086 - 0x1592) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:04.027 Found net devices under 0000:09:00.0: cvl_0_0 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:04.027 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:04.028 Found net devices under 0000:09:00.1: cvl_0_1 
00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:04.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:04.028 00:11:04.028 --- 10.0.0.2 ping statistics --- 00:11:04.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.028 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:11:04.028 00:11:04.028 --- 10.0.0.1 ping statistics --- 00:11:04.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.028 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 
00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=152322 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 152322 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 152322 ']' 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.028 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.028 [2024-10-07 09:31:52.985330] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:11:04.028 [2024-10-07 09:31:52.985410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.288 [2024-10-07 09:31:53.053236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.288 [2024-10-07 09:31:53.164475] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.288 [2024-10-07 09:31:53.164541] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.288 [2024-10-07 09:31:53.164554] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.288 [2024-10-07 09:31:53.164565] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.288 [2024-10-07 09:31:53.164574] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:04.288 [2024-10-07 09:31:53.166226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.288 [2024-10-07 09:31:53.166258] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.288 [2024-10-07 09:31:53.166316] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.288 [2024-10-07 09:31:53.166319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.546 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.546 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:04.546 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:04.546 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.546 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.546 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.546 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.804 [2024-10-07 09:31:53.614311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.804 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.062 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:05.062 09:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.320 09:31:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:05.320 09:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.579 09:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:05.579 09:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.837 09:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:05.837 09:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:06.095 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.354 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:06.354 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.920 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:06.920 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.920 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:06.920 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:07.486 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.486 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:07.486 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.743 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:07.743 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:08.001 09:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.258 [2024-10-07 09:31:57.217250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.258 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:08.515 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:08.772 09:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
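
The `waitforserial` helper traced next polls `lsblk -l -o NAME,SERIAL` with `grep -c`, sleeping between attempts until the expected number of namespaces for the connected subsystem shows up (here 4, serial `SPDKISFASTANDAWESOME`). A minimal standalone sketch of that retry pattern follows; `list_devices` is a hypothetical mock standing in for `lsblk` so the sketch runs without real NVMe devices, and the real helper sleeps 2 seconds per attempt where the mock just advances a counter.

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial retry pattern from common/autotest_common.sh:
# poll a device listing until the expected count of entries carrying a
# given serial appears, bounded at 16 attempts.

attempt=0
list_devices() {
    # Hypothetical stand-in for `lsblk -l -o NAME,SERIAL`:
    # pretend the 4 namespaces become visible on the 4th poll.
    if (( attempt >= 3 )); then
        printf 'nvme0n%d SPDKISFASTANDAWESOME\n' 1 2 3 4
    fi
}

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do
        found=$(list_devices | grep -c "$serial")
        (( found == expected )) && return 0
        (( attempt++ ))   # the real helper does `sleep 2` here instead
    done
    echo "devices for $serial never appeared" >&2
    return 1
}

waitforserial SPDKISFASTANDAWESOME 4 && echo "found 4 devices"
# prints "found 4 devices"
```

Counting via `grep -c` rather than parsing device names keeps the helper agnostic to how the kernel numbers the namespaces, which is why the trace above only greps for the serial string.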
00:11:09.704 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:09.704 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.704 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.704 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:09.704 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:09.704 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:11.601 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:11.601 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:11.601 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.601 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:11.601 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.601 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:11.601 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:11.601 [global] 00:11:11.601 thread=1 00:11:11.601 invalidate=1 00:11:11.601 rw=write 00:11:11.601 time_based=1 00:11:11.601 runtime=1 00:11:11.601 ioengine=libaio 00:11:11.601 direct=1 00:11:11.601 bs=4096 00:11:11.601 iodepth=1 00:11:11.601 norandommap=0 00:11:11.601 numjobs=1 00:11:11.601 00:11:11.601 
verify_dump=1 00:11:11.601 verify_backlog=512 00:11:11.601 verify_state_save=0 00:11:11.601 do_verify=1 00:11:11.601 verify=crc32c-intel 00:11:11.601 [job0] 00:11:11.601 filename=/dev/nvme0n1 00:11:11.601 [job1] 00:11:11.601 filename=/dev/nvme0n2 00:11:11.601 [job2] 00:11:11.601 filename=/dev/nvme0n3 00:11:11.601 [job3] 00:11:11.601 filename=/dev/nvme0n4 00:11:11.601 Could not set queue depth (nvme0n1) 00:11:11.601 Could not set queue depth (nvme0n2) 00:11:11.601 Could not set queue depth (nvme0n3) 00:11:11.601 Could not set queue depth (nvme0n4) 00:11:11.859 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.859 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.859 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.859 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.859 fio-3.35 00:11:11.859 Starting 4 threads 00:11:13.233 00:11:13.233 job0: (groupid=0, jobs=1): err= 0: pid=153364: Mon Oct 7 09:32:01 2024 00:11:13.233 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:13.233 slat (nsec): min=4665, max=53755, avg=13526.87, stdev=8865.66 00:11:13.233 clat (usec): min=171, max=41973, avg=375.80, stdev=2090.12 00:11:13.233 lat (usec): min=178, max=41991, avg=389.33, stdev=2090.07 00:11:13.233 clat percentiles (usec): 00:11:13.233 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 206], 00:11:13.233 | 30.00th=[ 217], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 265], 00:11:13.233 | 70.00th=[ 293], 80.00th=[ 326], 90.00th=[ 375], 95.00th=[ 412], 00:11:13.233 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[41157], 99.95th=[42206], 00:11:13.233 | 99.99th=[42206] 00:11:13.233 write: IOPS=1792, BW=7169KiB/s (7341kB/s)(7176KiB/1001msec); 0 zone resets 00:11:13.233 slat (nsec): min=6561, max=67305, avg=18187.93, 
stdev=7230.66 00:11:13.233 clat (usec): min=123, max=4043, avg=197.83, stdev=109.63 00:11:13.233 lat (usec): min=132, max=4066, avg=216.02, stdev=112.04 00:11:13.233 clat percentiles (usec): 00:11:13.233 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:11:13.233 | 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 184], 60.00th=[ 202], 00:11:13.233 | 70.00th=[ 215], 80.00th=[ 231], 90.00th=[ 273], 95.00th=[ 338], 00:11:13.233 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 461], 99.95th=[ 4047], 00:11:13.233 | 99.99th=[ 4047] 00:11:13.233 bw ( KiB/s): min= 8192, max= 8192, per=37.34%, avg=8192.00, stdev= 0.00, samples=1 00:11:13.233 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:13.233 lat (usec) : 250=70.72%, 500=28.59%, 750=0.54% 00:11:13.233 lat (msec) : 10=0.03%, 50=0.12% 00:11:13.233 cpu : usr=3.10%, sys=5.70%, ctx=3331, majf=0, minf=1 00:11:13.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.233 issued rwts: total=1536,1794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.233 job1: (groupid=0, jobs=1): err= 0: pid=153365: Mon Oct 7 09:32:01 2024 00:11:13.233 read: IOPS=27, BW=111KiB/s (114kB/s)(112KiB/1010msec) 00:11:13.233 slat (nsec): min=13418, max=32491, avg=24883.57, stdev=8571.54 00:11:13.233 clat (usec): min=332, max=41991, avg=32320.66, stdev=16962.78 00:11:13.233 lat (usec): min=347, max=42005, avg=32345.54, stdev=16960.45 00:11:13.233 clat percentiles (usec): 00:11:13.233 | 1.00th=[ 334], 5.00th=[ 359], 10.00th=[ 465], 20.00th=[ 478], 00:11:13.233 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:13.233 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:13.233 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:11:13.233 | 99.99th=[42206] 00:11:13.233 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:11:13.233 slat (nsec): min=7681, max=60869, avg=15139.23, stdev=3561.22 00:11:13.233 clat (usec): min=136, max=365, avg=184.18, stdev=30.98 00:11:13.233 lat (usec): min=151, max=426, avg=199.31, stdev=31.14 00:11:13.233 clat percentiles (usec): 00:11:13.233 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:11:13.233 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 184], 00:11:13.233 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 237], 00:11:13.233 | 99.00th=[ 269], 99.50th=[ 330], 99.90th=[ 367], 99.95th=[ 367], 00:11:13.233 | 99.99th=[ 367] 00:11:13.233 bw ( KiB/s): min= 4096, max= 4096, per=18.67%, avg=4096.00, stdev= 0.00, samples=1 00:11:13.233 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:13.233 lat (usec) : 250=91.85%, 500=4.07% 00:11:13.233 lat (msec) : 50=4.07% 00:11:13.233 cpu : usr=0.69%, sys=0.50%, ctx=540, majf=0, minf=1 00:11:13.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.233 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.233 job2: (groupid=0, jobs=1): err= 0: pid=153366: Mon Oct 7 09:32:01 2024 00:11:13.233 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:13.233 slat (nsec): min=4593, max=71861, avg=16409.97, stdev=9991.43 00:11:13.233 clat (usec): min=184, max=41011, avg=407.18, stdev=2318.22 00:11:13.233 lat (usec): min=189, max=41026, avg=423.59, stdev=2318.31 00:11:13.233 clat percentiles (usec): 00:11:13.233 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:11:13.233 | 30.00th=[ 210], 40.00th=[ 
217], 50.00th=[ 239], 60.00th=[ 285], 00:11:13.233 | 70.00th=[ 310], 80.00th=[ 351], 90.00th=[ 388], 95.00th=[ 408], 00:11:13.233 | 99.00th=[ 461], 99.50th=[ 1020], 99.90th=[41157], 99.95th=[41157], 00:11:13.233 | 99.99th=[41157] 00:11:13.233 write: IOPS=1695, BW=6781KiB/s (6944kB/s)(6788KiB/1001msec); 0 zone resets 00:11:13.233 slat (nsec): min=8470, max=50965, avg=17121.43, stdev=5038.93 00:11:13.233 clat (usec): min=137, max=409, avg=180.16, stdev=29.48 00:11:13.233 lat (usec): min=153, max=426, avg=197.29, stdev=30.66 00:11:13.233 clat percentiles (usec): 00:11:13.233 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:11:13.233 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:11:13.233 | 70.00th=[ 188], 80.00th=[ 204], 90.00th=[ 223], 95.00th=[ 231], 00:11:13.233 | 99.00th=[ 281], 99.50th=[ 318], 99.90th=[ 347], 99.95th=[ 412], 00:11:13.233 | 99.99th=[ 412] 00:11:13.233 bw ( KiB/s): min= 4096, max= 4096, per=18.67%, avg=4096.00, stdev= 0.00, samples=1 00:11:13.233 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:13.233 lat (usec) : 250=76.40%, 500=23.23%, 750=0.06%, 1000=0.06% 00:11:13.233 lat (msec) : 2=0.06%, 4=0.03%, 50=0.15% 00:11:13.233 cpu : usr=3.00%, sys=5.50%, ctx=3235, majf=0, minf=1 00:11:13.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.233 issued rwts: total=1536,1697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.234 job3: (groupid=0, jobs=1): err= 0: pid=153367: Mon Oct 7 09:32:01 2024 00:11:13.234 read: IOPS=1432, BW=5730KiB/s (5868kB/s)(5736KiB/1001msec) 00:11:13.234 slat (nsec): min=4467, max=57394, avg=13576.69, stdev=10256.60 00:11:13.234 clat (usec): min=176, max=41947, avg=481.25, stdev=3037.89 00:11:13.234 
lat (usec): min=183, max=41981, avg=494.83, stdev=3038.64 00:11:13.234 clat percentiles (usec): 00:11:13.234 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:11:13.234 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 235], 00:11:13.234 | 70.00th=[ 249], 80.00th=[ 293], 90.00th=[ 383], 95.00th=[ 457], 00:11:13.234 | 99.00th=[ 529], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:11:13.234 | 99.99th=[42206] 00:11:13.234 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:13.234 slat (nsec): min=5694, max=51483, avg=11119.87, stdev=5076.08 00:11:13.234 clat (usec): min=119, max=347, avg=171.15, stdev=30.50 00:11:13.234 lat (usec): min=125, max=363, avg=182.27, stdev=33.84 00:11:13.234 clat percentiles (usec): 00:11:13.234 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 145], 00:11:13.234 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 174], 00:11:13.234 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 212], 95.00th=[ 231], 00:11:13.234 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 347], 00:11:13.234 | 99.99th=[ 347] 00:11:13.234 bw ( KiB/s): min= 8192, max= 8192, per=37.34%, avg=8192.00, stdev= 0.00, samples=1 00:11:13.234 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:13.234 lat (usec) : 250=85.19%, 500=13.87%, 750=0.67% 00:11:13.234 lat (msec) : 50=0.27% 00:11:13.234 cpu : usr=1.90%, sys=3.90%, ctx=2970, majf=0, minf=1 00:11:13.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.234 issued rwts: total=1434,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.234 00:11:13.234 Run status group 0 (all jobs): 00:11:13.234 READ: bw=17.5MiB/s (18.4MB/s), 111KiB/s-6138KiB/s 
(114kB/s-6285kB/s), io=17.7MiB (18.6MB), run=1001-1010msec
00:11:13.234 WRITE: bw=21.4MiB/s (22.5MB/s), 2028KiB/s-7169KiB/s (2076kB/s-7341kB/s), io=21.6MiB (22.7MB), run=1001-1010msec
00:11:13.234
00:11:13.234 Disk stats (read/write):
00:11:13.234 nvme0n1: ios=1439/1536, merge=0/0, ticks=1307/268, in_queue=1575, util=84.67%
00:11:13.234 nvme0n2: ios=72/512, merge=0/0, ticks=720/93, in_queue=813, util=85.21%
00:11:13.234 nvme0n3: ios=1053/1449, merge=0/0, ticks=1376/255, in_queue=1631, util=92.56%
00:11:13.234 nvme0n4: ios=1081/1044, merge=0/0, ticks=648/174, in_queue=822, util=93.77%
00:11:13.234 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:11:13.234 [global]
00:11:13.234 thread=1
00:11:13.234 invalidate=1
00:11:13.234 rw=randwrite
00:11:13.234 time_based=1
00:11:13.234 runtime=1
00:11:13.234 ioengine=libaio
00:11:13.234 direct=1
00:11:13.234 bs=4096
00:11:13.234 iodepth=1
00:11:13.234 norandommap=0
00:11:13.234 numjobs=1
00:11:13.234
00:11:13.234 verify_dump=1
00:11:13.234 verify_backlog=512
00:11:13.234 verify_state_save=0
00:11:13.234 do_verify=1
00:11:13.234 verify=crc32c-intel
00:11:13.234 [job0]
00:11:13.234 filename=/dev/nvme0n1
00:11:13.234 [job1]
00:11:13.234 filename=/dev/nvme0n2
00:11:13.234 [job2]
00:11:13.234 filename=/dev/nvme0n3
00:11:13.234 [job3]
00:11:13.234 filename=/dev/nvme0n4
00:11:13.234 Could not set queue depth (nvme0n1)
00:11:13.234 Could not set queue depth (nvme0n2)
00:11:13.234 Could not set queue depth (nvme0n3)
00:11:13.234 Could not set queue depth (nvme0n4)
00:11:13.234 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:13.234 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:13.234 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B,
ioengine=libaio, iodepth=1
00:11:13.234 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:13.234 fio-3.35
00:11:13.234 Starting 4 threads
00:11:14.609
00:11:14.609 job0: (groupid=0, jobs=1): err= 0: pid=153587: Mon Oct 7 09:32:03 2024
00:11:14.609 read: IOPS=1261, BW=5047KiB/s (5168kB/s)(5052KiB/1001msec)
00:11:14.609 slat (nsec): min=6876, max=63174, avg=17479.48, stdev=8816.84
00:11:14.609 clat (usec): min=176, max=41073, avg=493.20, stdev=3011.12
00:11:14.609 lat (usec): min=189, max=41107, avg=510.68, stdev=3011.63
00:11:14.609 clat percentiles (usec):
00:11:14.609 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204],
00:11:14.609 | 30.00th=[ 215], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 265],
00:11:14.609 | 70.00th=[ 293], 80.00th=[ 343], 90.00th=[ 400], 95.00th=[ 429],
00:11:14.609 | 99.00th=[ 469], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157],
00:11:14.609 | 99.99th=[41157]
00:11:14.609 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:11:14.609 slat (nsec): min=6990, max=64551, avg=16288.07, stdev=6766.18
00:11:14.609 clat (usec): min=131, max=1333, avg=206.40, stdev=76.29
00:11:14.609 lat (usec): min=145, max=1343, avg=222.69, stdev=76.11
00:11:14.609 clat percentiles (usec):
00:11:14.609 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149],
00:11:14.609 | 30.00th=[ 159], 40.00th=[ 169], 50.00th=[ 198], 60.00th=[ 215],
00:11:14.609 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 262], 95.00th=[ 355],
00:11:14.609 | 99.00th=[ 441], 99.50th=[ 502], 99.90th=[ 971], 99.95th=[ 1336],
00:11:14.609 | 99.99th=[ 1336]
00:11:14.609 bw ( KiB/s): min= 8192, max= 8192, per=36.68%, avg=8192.00, stdev= 0.00, samples=1
00:11:14.609 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:11:14.609 lat (usec) : 250=71.35%, 500=27.97%, 750=0.32%, 1000=0.07%
00:11:14.609 lat (msec) : 2=0.04%, 50=0.25%
00:11:14.609 cpu : usr=3.20%, sys=4.60%, ctx=2800,
majf=0, minf=1
00:11:14.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:14.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:14.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:14.609 issued rwts: total=1263,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:14.609 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:14.609 job1: (groupid=0, jobs=1): err= 0: pid=153594: Mon Oct 7 09:32:03 2024
00:11:14.609 read: IOPS=1095, BW=4384KiB/s (4489kB/s)(4388KiB/1001msec)
00:11:14.609 slat (nsec): min=7396, max=50306, avg=14214.11, stdev=5972.58
00:11:14.609 clat (usec): min=188, max=41389, avg=555.85, stdev=3243.31
00:11:14.609 lat (usec): min=196, max=41412, avg=570.07, stdev=3245.03
00:11:14.609 clat percentiles (usec):
00:11:14.609 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 231],
00:11:14.609 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 269], 60.00th=[ 310],
00:11:14.609 | 70.00th=[ 334], 80.00th=[ 371], 90.00th=[ 408], 95.00th=[ 437],
00:11:14.609 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:14.609 | 99.99th=[41157]
00:11:14.609 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:11:14.609 slat (nsec): min=7801, max=59432, avg=18342.73, stdev=8030.70
00:11:14.609 clat (usec): min=144, max=1231, avg=217.66, stdev=64.88
00:11:14.609 lat (usec): min=156, max=1244, avg=236.00, stdev=63.15
00:11:14.609 clat percentiles (usec):
00:11:14.609 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 184],
00:11:14.609 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 215],
00:11:14.610 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 258], 95.00th=[ 330],
00:11:14.610 | 99.00th=[ 449], 99.50th=[ 529], 99.90th=[ 930], 99.95th=[ 1237],
00:11:14.610 | 99.99th=[ 1237]
00:11:14.610 bw ( KiB/s): min= 4096, max= 4096, per=18.34%, avg=4096.00, stdev= 0.00, samples=1
00:11:14.610 iops : min= 1024, max=
1024, avg=1024.00, stdev= 0.00, samples=1
00:11:14.610 lat (usec) : 250=67.07%, 500=31.67%, 750=0.80%, 1000=0.15%
00:11:14.610 lat (msec) : 2=0.04%, 50=0.27%
00:11:14.610 cpu : usr=3.30%, sys=5.70%, ctx=2635, majf=0, minf=1
00:11:14.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:14.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:14.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:14.610 issued rwts: total=1097,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:14.610 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:14.610 job2: (groupid=0, jobs=1): err= 0: pid=153597: Mon Oct 7 09:32:03 2024
00:11:14.610 read: IOPS=1080, BW=4320KiB/s (4424kB/s)(4372KiB/1012msec)
00:11:14.610 slat (nsec): min=6691, max=69515, avg=19880.77, stdev=10218.42
00:11:14.610 clat (usec): min=179, max=41316, avg=591.74, stdev=3466.36
00:11:14.610 lat (usec): min=189, max=41329, avg=611.62, stdev=3466.61
00:11:14.610 clat percentiles (usec):
00:11:14.610 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 208],
00:11:14.610 | 30.00th=[ 219], 40.00th=[ 231], 50.00th=[ 245], 60.00th=[ 277],
00:11:14.610 | 70.00th=[ 322], 80.00th=[ 424], 90.00th=[ 474], 95.00th=[ 498],
00:11:14.610 | 99.00th=[ 750], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157],
00:11:14.610 | 99.99th=[41157]
00:11:14.610 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets
00:11:14.610 slat (nsec): min=6354, max=59566, avg=12479.20, stdev=5046.01
00:11:14.610 clat (usec): min=132, max=520, avg=203.31, stdev=54.53
00:11:14.610 lat (usec): min=141, max=546, avg=215.78, stdev=55.98
00:11:14.610 clat percentiles (usec):
00:11:14.610 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155],
00:11:14.610 | 30.00th=[ 165], 40.00th=[ 184], 50.00th=[ 198], 60.00th=[ 212],
00:11:14.610 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 277],
00:11:14.610 | 99.00th=[ 424],
99.50th=[ 490], 99.90th=[ 506], 99.95th=[ 519],
00:11:14.610 | 99.99th=[ 519]
00:11:14.610 bw ( KiB/s): min= 4576, max= 7712, per=27.51%, avg=6144.00, stdev=2217.49, samples=2
00:11:14.610 iops : min= 1144, max= 1928, avg=1536.00, stdev=554.37, samples=2
00:11:14.610 lat (usec) : 250=75.35%, 500=22.40%, 750=1.83%, 1000=0.11%
00:11:14.610 lat (msec) : 50=0.30%
00:11:14.610 cpu : usr=1.98%, sys=4.35%, ctx=2631, majf=0, minf=1
00:11:14.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:14.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:14.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:14.610 issued rwts: total=1093,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:14.610 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:14.610 job3: (groupid=0, jobs=1): err= 0: pid=153604: Mon Oct 7 09:32:03 2024
00:11:14.610 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:11:14.610 slat (nsec): min=6071, max=69684, avg=16767.14, stdev=6898.52
00:11:14.610 clat (usec): min=222, max=41227, avg=707.39, stdev=4199.32
00:11:14.610 lat (usec): min=233, max=41238, avg=724.16, stdev=4200.54
00:11:14.610 clat percentiles (usec):
00:11:14.610 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243],
00:11:14.610 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262],
00:11:14.610 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 322], 95.00th=[ 351],
00:11:14.610 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:14.610 | 99.99th=[41157]
00:11:14.610 write: IOPS=1040, BW=4164KiB/s (4264kB/s)(4168KiB/1001msec); 0 zone resets
00:11:14.610 slat (nsec): min=6336, max=42374, avg=13356.29, stdev=6192.71
00:11:14.610 clat (usec): min=158, max=914, avg=226.21, stdev=45.66
00:11:14.610 lat (usec): min=165, max=924, avg=239.56, stdev=45.31
00:11:14.610 clat percentiles (usec):
00:11:14.610 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 182],
20.00th=[ 196],
00:11:14.610 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233],
00:11:14.610 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 277],
00:11:14.610 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 799], 99.95th=[ 914],
00:11:14.610 | 99.99th=[ 914]
00:11:14.610 bw ( KiB/s): min= 4096, max= 4096, per=18.34%, avg=4096.00, stdev= 0.00, samples=1
00:11:14.610 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:14.610 lat (usec) : 250=63.12%, 500=36.16%, 750=0.05%, 1000=0.10%
00:11:14.610 lat (msec) : 4=0.05%, 50=0.53%
00:11:14.610 cpu : usr=1.80%, sys=3.10%, ctx=2067, majf=0, minf=1
00:11:14.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:14.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:14.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:14.610 issued rwts: total=1024,1042,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:14.610 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:14.610
00:11:14.610 Run status group 0 (all jobs):
00:11:14.610 READ: bw=17.3MiB/s (18.1MB/s), 4092KiB/s-5047KiB/s (4190kB/s-5168kB/s), io=17.5MiB (18.3MB), run=1001-1012msec
00:11:14.610 WRITE: bw=21.8MiB/s (22.9MB/s), 4164KiB/s-6138KiB/s (4264kB/s-6285kB/s), io=22.1MiB (23.1MB), run=1001-1012msec
00:11:14.610
00:11:14.610 Disk stats (read/write):
00:11:14.610 nvme0n1: ios=1160/1536, merge=0/0, ticks=1035/305, in_queue=1340, util=98.40%
00:11:14.610 nvme0n2: ios=980/1024, merge=0/0, ticks=1525/227, in_queue=1752, util=98.58%
00:11:14.610 nvme0n3: ios=1082/1101, merge=0/0, ticks=840/221, in_queue=1061, util=97.80%
00:11:14.610 nvme0n4: ios=571/1024, merge=0/0, ticks=852/226, in_queue=1078, util=97.46%
00:11:14.610 09:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:11:14.610 [global]
00:11:14.610
thread=1
00:11:14.610 invalidate=1
00:11:14.610 rw=write
00:11:14.610 time_based=1
00:11:14.610 runtime=1
00:11:14.610 ioengine=libaio
00:11:14.610 direct=1
00:11:14.610 bs=4096
00:11:14.610 iodepth=128
00:11:14.610 norandommap=0
00:11:14.610 numjobs=1
00:11:14.610
00:11:14.610 verify_dump=1
00:11:14.610 verify_backlog=512
00:11:14.610 verify_state_save=0
00:11:14.610 do_verify=1
00:11:14.610 verify=crc32c-intel
00:11:14.610 [job0]
00:11:14.610 filename=/dev/nvme0n1
00:11:14.610 [job1]
00:11:14.610 filename=/dev/nvme0n2
00:11:14.610 [job2]
00:11:14.610 filename=/dev/nvme0n3
00:11:14.610 [job3]
00:11:14.610 filename=/dev/nvme0n4
00:11:14.610 Could not set queue depth (nvme0n1)
00:11:14.610 Could not set queue depth (nvme0n2)
00:11:14.610 Could not set queue depth (nvme0n3)
00:11:14.610 Could not set queue depth (nvme0n4)
00:11:14.869 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:14.869 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:14.869 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:14.869 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:14.869 fio-3.35
00:11:14.869 Starting 4 threads
00:11:16.242
00:11:16.242 job0: (groupid=0, jobs=1): err= 0: pid=153823: Mon Oct 7 09:32:04 2024
00:11:16.242 read: IOPS=4753, BW=18.6MiB/s (19.5MB/s)(19.4MiB/1044msec)
00:11:16.242 slat (usec): min=3, max=4048, avg=91.44, stdev=439.42
00:11:16.242 clat (usec): min=8825, max=52759, avg=13321.24, stdev=5776.07
00:11:16.242 lat (usec): min=8980, max=52764, avg=13412.67, stdev=5777.91
00:11:16.242 clat percentiles (usec):
00:11:16.242 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[11076], 20.00th=[11469],
00:11:16.242 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649],
00:11:16.242 | 70.00th=[12911], 80.00th=[13304],
90.00th=[13698], 95.00th=[15008],
00:11:16.242 | 99.00th=[49546], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691],
00:11:16.242 | 99.99th=[52691]
00:11:16.242 write: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1044msec); 0 zone resets
00:11:16.242 slat (usec): min=5, max=25591, avg=94.59, stdev=564.33
00:11:16.242 clat (usec): min=4916, max=37222, avg=12458.46, stdev=2471.38
00:11:16.242 lat (usec): min=4942, max=37272, avg=12553.05, stdev=2508.36
00:11:16.242 clat percentiles (usec):
00:11:16.242 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10945], 20.00th=[11338],
00:11:16.242 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12387],
00:11:16.242 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[15270],
00:11:16.242 | 99.00th=[26346], 99.50th=[27919], 99.90th=[28443], 99.95th=[28443],
00:11:16.242 | 99.99th=[36963]
00:11:16.242 bw ( KiB/s): min=20480, max=20480, per=29.81%, avg=20480.00, stdev= 0.00, samples=2
00:11:16.242 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2
00:11:16.242 lat (msec) : 10=4.28%, 20=92.80%, 50=2.67%, 100=0.25%
00:11:16.242 cpu : usr=7.38%, sys=13.23%, ctx=466, majf=0, minf=1
00:11:16.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:11:16.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:16.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:16.242 issued rwts: total=4963,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:16.243 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:16.243 job1: (groupid=0, jobs=1): err= 0: pid=153824: Mon Oct 7 09:32:04 2024
00:11:16.243 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec)
00:11:16.243 slat (usec): min=2, max=16335, avg=114.53, stdev=804.68
00:11:16.243 clat (usec): min=4845, max=46242, avg=15592.47, stdev=6940.68
00:11:16.243 lat (usec): min=4853, max=50416, avg=15707.01, stdev=7011.32
00:11:16.243 clat percentiles (usec):
00:11:16.243 |
1.00th=[ 8356], 5.00th=[10421], 10.00th=[10683], 20.00th=[11600],
00:11:16.243 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12649],
00:11:16.243 | 70.00th=[13960], 80.00th=[20317], 90.00th=[28967], 95.00th=[30802],
00:11:16.243 | 99.00th=[36963], 99.50th=[37487], 99.90th=[42730], 99.95th=[42730],
00:11:16.243 | 99.99th=[46400]
00:11:16.243 write: IOPS=4356, BW=17.0MiB/s (17.8MB/s)(17.2MiB/1008msec); 0 zone resets
00:11:16.243 slat (usec): min=4, max=14785, avg=105.56, stdev=692.21
00:11:16.243 clat (usec): min=1704, max=44115, avg=14297.45, stdev=6546.04
00:11:16.243 lat (usec): min=1709, max=45430, avg=14403.01, stdev=6610.59
00:11:16.243 clat percentiles (usec):
00:11:16.243 | 1.00th=[ 5080], 5.00th=[ 8094], 10.00th=[ 9503], 20.00th=[11076],
00:11:16.243 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125],
00:11:16.243 | 70.00th=[12387], 80.00th=[19530], 90.00th=[23200], 95.00th=[28705],
00:11:16.243 | 99.00th=[38536], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303],
00:11:16.243 | 99.99th=[44303]
00:11:16.243 bw ( KiB/s): min=13632, max=20480, per=24.82%, avg=17056.00, stdev=4842.27, samples=2
00:11:16.243 iops : min= 3408, max= 5120, avg=4264.00, stdev=1210.57, samples=2
00:11:16.243 lat (msec) : 2=0.09%, 4=0.25%, 10=8.94%, 20=71.20%, 50=19.51%
00:11:16.243 cpu : usr=5.66%, sys=10.63%, ctx=265, majf=0, minf=1
00:11:16.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:11:16.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:16.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:16.243 issued rwts: total=4096,4391,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:16.243 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:16.243 job2: (groupid=0, jobs=1): err= 0: pid=153825: Mon Oct 7 09:32:04 2024
00:11:16.243 read: IOPS=3988, BW=15.6MiB/s (16.3MB/s)(16.2MiB/1043msec)
00:11:16.243 slat (usec): min=4, max=9453, avg=112.17,
stdev=555.21
00:11:16.243 clat (usec): min=10534, max=47862, avg=15641.15, stdev=4813.13
00:11:16.243 lat (usec): min=10548, max=47869, avg=15753.32, stdev=4812.19
00:11:16.243 clat percentiles (usec):
00:11:16.243 | 1.00th=[10945], 5.00th=[11863], 10.00th=[13042], 20.00th=[13566],
00:11:16.243 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746],
00:11:16.243 | 70.00th=[15401], 80.00th=[15926], 90.00th=[19268], 95.00th=[22414],
00:11:16.243 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973],
00:11:16.243 | 99.99th=[47973]
00:11:16.243 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets
00:11:16.243 slat (usec): min=5, max=11223, avg=103.69, stdev=556.65
00:11:16.243 clat (usec): min=1113, max=54075, avg=14583.55, stdev=5555.24
00:11:16.243 lat (usec): min=1125, max=54086, avg=14687.25, stdev=5562.43
00:11:16.243 clat percentiles (usec):
00:11:16.243 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[11207], 20.00th=[12256],
00:11:16.243 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[13960],
00:11:16.243 | 70.00th=[14615], 80.00th=[15401], 90.00th=[16712], 95.00th=[24511],
00:11:16.243 | 99.00th=[53740], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264],
00:11:16.243 | 99.99th=[54264]
00:11:16.243 bw ( KiB/s): min=16384, max=19976, per=26.46%, avg=18180.00, stdev=2539.93, samples=2
00:11:16.243 iops : min= 4096, max= 4994, avg=4545.00, stdev=634.98, samples=2
00:11:16.243 lat (msec) : 2=0.05%, 10=0.82%, 20=90.99%, 50=7.52%, 100=0.63%
00:11:16.243 cpu : usr=7.77%, sys=10.27%, ctx=405, majf=0, minf=2
00:11:16.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:11:16.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:16.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:16.243 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:16.243 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:16.243 job3: (groupid=0, jobs=1): err= 0: pid=153826: Mon Oct 7 09:32:04 2024
00:11:16.243 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec)
00:11:16.243 slat (usec): min=4, max=14023, avg=137.02, stdev=886.84
00:11:16.243 clat (usec): min=8722, max=41290, avg=17464.98, stdev=5313.03
00:11:16.243 lat (usec): min=8735, max=41309, avg=17602.00, stdev=5391.16
00:11:16.243 clat percentiles (usec):
00:11:16.243 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[13566], 20.00th=[13829],
00:11:16.243 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[15795],
00:11:16.243 | 70.00th=[19530], 80.00th=[22676], 90.00th=[25560], 95.00th=[29230],
00:11:16.243 | 99.00th=[31327], 99.50th=[31327], 99.90th=[35914], 99.95th=[39584],
00:11:16.243 | 99.99th=[41157]
00:11:16.243 write: IOPS=3780, BW=14.8MiB/s
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.243 issued rwts: total=3584,3815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.243 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.243 00:11:16.243 Run status group 0 (all jobs): 00:11:16.243 READ: bw=62.9MiB/s (65.9MB/s), 13.9MiB/s-18.6MiB/s (14.5MB/s-19.5MB/s), io=65.6MiB (68.8MB), run=1008-1044msec 00:11:16.243 WRITE: bw=67.1MiB/s (70.4MB/s), 14.8MiB/s-19.2MiB/s (15.5MB/s-20.1MB/s), io=70.1MiB (73.5MB), run=1008-1044msec 00:11:16.243 00:11:16.243 Disk stats (read/write): 00:11:16.243 nvme0n1: ios=4134/4239, merge=0/0, ticks=15955/16895, in_queue=32850, util=96.29% 00:11:16.243 nvme0n2: ios=3697/4096, merge=0/0, ticks=26181/26234, in_queue=52415, util=97.56% 00:11:16.243 nvme0n3: ios=3631/3662, merge=0/0, ticks=16446/15956, in_queue=32402, util=96.65% 00:11:16.243 nvme0n4: ios=3118/3232, merge=0/0, ticks=26469/24439, in_queue=50908, util=100.00% 00:11:16.243 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:16.243 [global] 00:11:16.243 thread=1 00:11:16.243 invalidate=1 00:11:16.243 rw=randwrite 00:11:16.243 time_based=1 00:11:16.243 runtime=1 00:11:16.243 ioengine=libaio 00:11:16.243 direct=1 00:11:16.243 bs=4096 00:11:16.243 iodepth=128 00:11:16.243 norandommap=0 00:11:16.243 numjobs=1 00:11:16.243 00:11:16.243 verify_dump=1 00:11:16.243 verify_backlog=512 00:11:16.243 verify_state_save=0 00:11:16.243 do_verify=1 00:11:16.243 verify=crc32c-intel 00:11:16.243 [job0] 00:11:16.243 filename=/dev/nvme0n1 00:11:16.243 [job1] 00:11:16.243 filename=/dev/nvme0n2 00:11:16.243 [job2] 00:11:16.243 filename=/dev/nvme0n3 00:11:16.243 [job3] 00:11:16.243 filename=/dev/nvme0n4 00:11:16.243 Could not set queue depth (nvme0n1) 00:11:16.243 Could not set queue depth (nvme0n2) 00:11:16.243 Could not set queue depth (nvme0n3) 00:11:16.243 Could not set queue depth (nvme0n4) 
00:11:16.243 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:16.243 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:16.243 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:16.243 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:16.243 fio-3.35
00:11:16.243 Starting 4 threads
00:11:17.678
00:11:17.678 job0: (groupid=0, jobs=1): err= 0: pid=154156: Mon Oct 7 09:32:06 2024
00:11:17.678 read: IOPS=3560, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec)
00:11:17.678 slat (usec): min=2, max=12326, avg=163.67, stdev=816.45
00:11:17.678 clat (usec): min=4512, max=64602, avg=21195.89, stdev=9654.25
00:11:17.678 lat (usec): min=5074, max=64622, avg=21359.56, stdev=9696.88
00:11:17.678 clat percentiles (usec):
00:11:17.678 | 1.00th=[ 6980], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[12518],
00:11:17.678 | 30.00th=[16909], 40.00th=[19268], 50.00th=[20841], 60.00th=[21890],
00:11:17.678 | 70.00th=[22938], 80.00th=[25560], 90.00th=[30278], 95.00th=[43254],
00:11:17.678 | 99.00th=[58983], 99.50th=[60031], 99.90th=[64750], 99.95th=[64750],
00:11:17.678 | 99.99th=[64750]
00:11:17.678 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets
00:11:17.678 slat (usec): min=3, max=9514, avg=105.28, stdev=550.45
00:11:17.678 clat (usec): min=6403, max=23726, avg=14272.99, stdev=3408.46
00:11:17.678 lat (usec): min=6414, max=23762, avg=14378.28, stdev=3394.63
00:11:17.678 clat percentiles (usec):
00:11:17.678 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[11469],
00:11:17.678 | 30.00th=[11994], 40.00th=[13435], 50.00th=[13698], 60.00th=[15270],
00:11:17.678 | 70.00th=[15664], 80.00th=[16712], 90.00th=[19268], 95.00th=[20579],
00:11:17.678 | 99.00th=[21890], 99.50th=[22676], 99.90th=[23725],
99.95th=[23725],
00:11:17.678 | 99.99th=[23725]
00:11:17.678 bw ( KiB/s): min=12288, max=16384, per=24.82%, avg=14336.00, stdev=2896.31, samples=2
00:11:17.678 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2
00:11:17.678 lat (msec) : 10=8.48%, 20=60.40%, 50=29.49%, 100=1.63%
00:11:17.678 cpu : usr=3.88%, sys=7.87%, ctx=310, majf=0, minf=1
00:11:17.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:11:17.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:17.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:17.678 issued rwts: total=3578,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:17.678 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:17.678 job1: (groupid=0, jobs=1): err= 0: pid=154158: Mon Oct 7 09:32:06 2024
00:11:17.678 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec)
00:11:17.678 slat (usec): min=2, max=8598, avg=132.19, stdev=681.74
00:11:17.678 clat (usec): min=9061, max=42032, avg=16673.28, stdev=3907.75
00:11:17.678 lat (usec): min=9075, max=44723, avg=16805.46, stdev=3953.11
00:11:17.678 clat percentiles (usec):
00:11:17.678 | 1.00th=[ 9634], 5.00th=[11731], 10.00th=[12256], 20.00th=[13960],
00:11:17.678 | 30.00th=[14746], 40.00th=[15795], 50.00th=[16188], 60.00th=[16450],
00:11:17.678 | 70.00th=[17433], 80.00th=[18744], 90.00th=[21627], 95.00th=[23200],
00:11:17.678 | 99.00th=[30278], 99.50th=[36963], 99.90th=[41681], 99.95th=[42206],
00:11:17.678 | 99.99th=[42206]
00:11:17.678 write: IOPS=3370, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1002msec); 0 zone resets
00:11:17.678 slat (usec): min=3, max=9125, avg=167.28, stdev=724.71
00:11:17.678 clat (usec): min=1849, max=64567, avg=22230.37, stdev=10070.93
00:11:17.678 lat (usec): min=5167, max=64600, avg=22397.66, stdev=10133.35
00:11:17.678 clat percentiles (usec):
00:11:17.678 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[11469], 20.00th=[12649],
00:11:17.678 | 30.00th=[15139],
40.00th=[18744], 50.00th=[21365], 60.00th=[24773],
00:11:17.678 | 70.00th=[27395], 80.00th=[29754], 90.00th=[31589], 95.00th=[39060],
00:11:17.678 | 99.00th=[58983], 99.50th=[61080], 99.90th=[64750], 99.95th=[64750],
00:11:17.678 | 99.99th=[64750]
00:11:17.678 bw ( KiB/s): min=12288, max=13712, per=22.51%, avg=13000.00, stdev=1006.92, samples=2
00:11:17.678 iops : min= 3072, max= 3428, avg=3250.00, stdev=251.73, samples=2
00:11:17.678 lat (msec) : 2=0.02%, 10=2.76%, 20=61.95%, 50=33.83%, 100=1.44%
00:11:17.678 cpu : usr=3.00%, sys=5.99%, ctx=391, majf=0, minf=1
00:11:17.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:11:17.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:17.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:17.678 issued rwts: total=3072,3377,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:17.678 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:17.678 job2: (groupid=0, jobs=1): err= 0: pid=154159: Mon Oct 7 09:32:06 2024
00:11:17.678 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec)
00:11:17.678 slat (usec): min=2, max=10710, avg=117.18, stdev=666.65
00:11:17.678 clat (usec): min=7079, max=33681, avg=15203.18, stdev=3765.87
00:11:17.678 lat (usec): min=7101, max=33701, avg=15320.35, stdev=3822.36
00:11:17.678 clat percentiles (usec):
00:11:17.678 | 1.00th=[ 9503], 5.00th=[11076], 10.00th=[11469], 20.00th=[12256],
00:11:17.678 | 30.00th=[12649], 40.00th=[12911], 50.00th=[14091], 60.00th=[15401],
00:11:17.678 | 70.00th=[17171], 80.00th=[18220], 90.00th=[20841], 95.00th=[21627],
00:11:17.679 | 99.00th=[27919], 99.50th=[29230], 99.90th=[33817], 99.95th=[33817],
00:11:17.679 | 99.99th=[33817]
00:11:17.679 write: IOPS=4062, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets
00:11:17.679 slat (usec): min=5, max=8296, avg=130.90, stdev=657.84
00:11:17.679 clat (usec): min=4800, max=43194, avg=17825.18, stdev=7956.68
00:11:17.679 lat
(usec): min=6827, max=43223, avg=17956.08, stdev=8015.92
00:11:17.679 clat percentiles (usec):
00:11:17.679 | 1.00th=[ 7898], 5.00th=[11076], 10.00th=[11600], 20.00th=[12125],
00:11:17.679 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13960], 60.00th=[16057],
00:11:17.679 | 70.00th=[18744], 80.00th=[26608], 90.00th=[31327], 95.00th=[35390],
00:11:17.679 | 99.00th=[39060], 99.50th=[39060], 99.90th=[43254], 99.95th=[43254],
00:11:17.679 | 99.99th=[43254]
00:11:17.679 bw ( KiB/s): min=13632, max=18140, per=27.50%, avg=15886.00, stdev=3187.64, samples=2
00:11:17.679 iops : min= 3408, max= 4535, avg=3971.50, stdev=796.91, samples=2
00:11:17.679 lat (msec) : 10=1.67%, 20=77.65%, 50=20.68%
00:11:17.679 cpu : usr=5.06%, sys=10.53%, ctx=342, majf=0, minf=2
00:11:17.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:11:17.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:17.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:17.679 issued rwts: total=3584,4095,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:17.679 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:17.679 job3: (groupid=0, jobs=1): err= 0: pid=154160: Mon Oct 7 09:32:06 2024
00:11:17.679 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec)
00:11:17.679 slat (usec): min=2, max=13746, avg=149.45, stdev=918.90
00:11:17.679 clat (usec): min=6979, max=48116, avg=19500.07, stdev=7286.04
00:11:17.679 lat (usec): min=7107, max=48188, avg=19649.52, stdev=7350.57
00:11:17.679 clat percentiles (usec):
00:11:17.679 | 1.00th=[ 7701], 5.00th=[12780], 10.00th=[13042], 20.00th=[14746],
00:11:17.679 | 30.00th=[15401], 40.00th=[16188], 50.00th=[16712], 60.00th=[17695],
00:11:17.679 | 70.00th=[20841], 80.00th=[25822], 90.00th=[30278], 95.00th=[35390],
00:11:17.679 | 99.00th=[42730], 99.50th=[42730], 99.90th=[35914], 99.95th=[39584],
00:11:17.679 | 99.99th=[41157]
00:11:17.679 write: IOPS=3485, BW=13.6MiB/s
(14.3MB/s)(13.7MiB/1004msec); 0 zone resets
00:11:17.679 slat (usec): min=3, max=9994, avg=147.54, stdev=855.46
00:11:17.679 clat (usec): min=3617, max=34900, avg=18986.17, stdev=5534.54
00:11:17.679 lat (usec): min=6357, max=34919, avg=19133.71, stdev=5598.43
00:11:17.679 clat percentiles (usec):
00:11:17.679 | 1.00th=[ 8979], 5.00th=[12780], 10.00th=[13042], 20.00th=[13304],
00:11:17.679 | 30.00th=[14484], 40.00th=[15008], 50.00th=[18482], 60.00th=[21103],
00:11:17.679 | 70.00th=[23200], 80.00th=[24249], 90.00th=[26084], 95.00th=[28443],
00:11:17.679 | 99.00th=[30540], 99.50th=[32375], 99.90th=[33424], 99.95th=[34341],
00:11:17.679 | 99.99th=[34866]
00:11:17.679 bw ( KiB/s): min=12288, max=14688, per=23.35%, avg=13488.00, stdev=1697.06, samples=2
00:11:17.679 iops : min= 3072, max= 3672, avg=3372.00, stdev=424.26, samples=2
00:11:17.679 lat (msec) : 4=0.02%, 10=1.63%, 20=59.17%, 50=39.19%
00:11:17.679 cpu : usr=2.99%, sys=4.79%, ctx=252, majf=0, minf=1
00:11:17.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:11:17.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:17.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:17.679 issued rwts: total=3072,3499,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:17.679 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:17.679
00:11:17.679 Run status group 0 (all jobs):
00:11:17.679 READ: bw=51.6MiB/s (54.1MB/s), 12.0MiB/s-13.9MiB/s (12.5MB/s-14.6MB/s), io=52.0MiB (54.5MB), run=1002-1008msec
00:11:17.679 WRITE: bw=56.4MiB/s (59.1MB/s), 13.2MiB/s-15.9MiB/s (13.8MB/s-16.6MB/s), io=56.9MiB (59.6MB), run=1002-1008msec
00:11:17.679
00:11:17.679 Disk stats (read/write):
00:11:17.679 nvme0n1: ios=3052/3072, merge=0/0, ticks=18309/10378, in_queue=28687, util=98.20%
00:11:17.679 nvme0n2: ios=2599/2716, merge=0/0, ticks=13075/20515, in_queue=33590, util=97.76%
00:11:17.679 nvme0n3: ios=3350/3584, merge=0/0, ticks=24148/27505,
in_queue=51653, util=98.12% 00:11:17.679 nvme0n4: ios=2617/2991, merge=0/0, ticks=18824/19669, in_queue=38493, util=97.68% 00:11:17.679 09:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:17.679 09:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=154298 00:11:17.679 09:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:17.679 09:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:17.679 [global] 00:11:17.679 thread=1 00:11:17.679 invalidate=1 00:11:17.679 rw=read 00:11:17.679 time_based=1 00:11:17.679 runtime=10 00:11:17.679 ioengine=libaio 00:11:17.679 direct=1 00:11:17.679 bs=4096 00:11:17.679 iodepth=1 00:11:17.679 norandommap=1 00:11:17.679 numjobs=1 00:11:17.679 00:11:17.679 [job0] 00:11:17.679 filename=/dev/nvme0n1 00:11:17.679 [job1] 00:11:17.679 filename=/dev/nvme0n2 00:11:17.679 [job2] 00:11:17.679 filename=/dev/nvme0n3 00:11:17.679 [job3] 00:11:17.679 filename=/dev/nvme0n4 00:11:17.679 Could not set queue depth (nvme0n1) 00:11:17.679 Could not set queue depth (nvme0n2) 00:11:17.679 Could not set queue depth (nvme0n3) 00:11:17.679 Could not set queue depth (nvme0n4) 00:11:17.679 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.679 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.679 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.679 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.679 fio-3.35 00:11:17.679 Starting 4 threads 00:11:20.970 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_raid_delete concat0 00:11:20.970 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:20.970 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=36511744, buflen=4096 00:11:20.970 fio: pid=154389, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:20.970 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:20.970 09:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:20.970 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=51531776, buflen=4096 00:11:20.970 fio: pid=154388, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:21.537 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1646592, buflen=4096 00:11:21.537 fio: pid=154386, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:21.537 09:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.537 09:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:21.794 09:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.794 09:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:21.794 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45428736, buflen=4096 00:11:21.794 fio: 
pid=154387, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:21.794 00:11:21.794 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=154386: Mon Oct 7 09:32:10 2024 00:11:21.794 read: IOPS=115, BW=459KiB/s (470kB/s)(1608KiB/3501msec) 00:11:21.794 slat (usec): min=4, max=17871, avg=55.01, stdev=889.71 00:11:21.794 clat (usec): min=183, max=41113, avg=8591.46, stdev=16408.23 00:11:21.794 lat (usec): min=187, max=58984, avg=8646.57, stdev=16524.02 00:11:21.794 clat percentiles (usec): 00:11:21.794 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 208], 20.00th=[ 302], 00:11:21.794 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 310], 60.00th=[ 318], 00:11:21.794 | 70.00th=[ 326], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:11:21.794 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:21.794 | 99.99th=[41157] 00:11:21.794 bw ( KiB/s): min= 96, max= 2208, per=1.30%, avg=449.33, stdev=861.57, samples=6 00:11:21.794 iops : min= 24, max= 552, avg=112.33, stdev=215.39, samples=6 00:11:21.794 lat (usec) : 250=11.91%, 500=67.25%, 750=0.25% 00:11:21.794 lat (msec) : 50=20.35% 00:11:21.794 cpu : usr=0.06%, sys=0.17%, ctx=405, majf=0, minf=1 00:11:21.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.794 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.794 issued rwts: total=403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.794 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=154387: Mon Oct 7 09:32:10 2024 00:11:21.794 read: IOPS=2893, BW=11.3MiB/s (11.8MB/s)(43.3MiB/3834msec) 00:11:21.794 slat (usec): min=5, max=11929, avg=12.64, stdev=153.49 00:11:21.794 clat (usec): min=163, max=41100, 
avg=327.94, stdev=1683.74 00:11:21.794 lat (usec): min=168, max=53000, avg=340.58, stdev=1739.77 00:11:21.794 clat percentiles (usec): 00:11:21.794 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 208], 00:11:21.794 | 30.00th=[ 221], 40.00th=[ 233], 50.00th=[ 247], 60.00th=[ 281], 00:11:21.794 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:11:21.794 | 99.00th=[ 359], 99.50th=[ 465], 99.90th=[41157], 99.95th=[41157], 00:11:21.794 | 99.99th=[41157] 00:11:21.794 bw ( KiB/s): min= 5172, max=15840, per=36.44%, avg=12540.00, stdev=3513.31, samples=7 00:11:21.794 iops : min= 1293, max= 3960, avg=3135.00, stdev=878.33, samples=7 00:11:21.794 lat (usec) : 250=52.01%, 500=47.66%, 750=0.14% 00:11:21.794 lat (msec) : 2=0.02%, 50=0.17% 00:11:21.794 cpu : usr=2.61%, sys=4.44%, ctx=11094, majf=0, minf=2 00:11:21.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.794 issued rwts: total=11092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.794 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=154388: Mon Oct 7 09:32:10 2024 00:11:21.794 read: IOPS=3936, BW=15.4MiB/s (16.1MB/s)(49.1MiB/3196msec) 00:11:21.794 slat (usec): min=4, max=11719, avg=11.80, stdev=145.59 00:11:21.794 clat (usec): min=172, max=41251, avg=238.08, stdev=519.17 00:11:21.794 lat (usec): min=179, max=41256, avg=249.89, stdev=539.75 00:11:21.794 clat percentiles (usec): 00:11:21.794 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:11:21.794 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:11:21.794 | 70.00th=[ 233], 80.00th=[ 269], 90.00th=[ 318], 95.00th=[ 334], 00:11:21.794 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 
562], 99.95th=[ 717], 00:11:21.794 | 99.99th=[41157] 00:11:21.794 bw ( KiB/s): min=11912, max=18632, per=45.70%, avg=15729.33, stdev=2369.47, samples=6 00:11:21.794 iops : min= 2978, max= 4658, avg=3932.33, stdev=592.37, samples=6 00:11:21.794 lat (usec) : 250=77.21%, 500=22.58%, 750=0.16%, 1000=0.02% 00:11:21.795 lat (msec) : 2=0.01%, 50=0.02% 00:11:21.795 cpu : usr=1.44%, sys=5.20%, ctx=12585, majf=0, minf=2 00:11:21.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.795 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.795 issued rwts: total=12582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.795 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=154389: Mon Oct 7 09:32:10 2024 00:11:21.795 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(34.8MiB/2920msec) 00:11:21.795 slat (nsec): min=5300, max=66243, avg=13327.02, stdev=7415.25 00:11:21.795 clat (usec): min=178, max=41179, avg=307.68, stdev=865.23 00:11:21.795 lat (usec): min=184, max=41195, avg=321.00, stdev=865.28 00:11:21.795 clat percentiles (usec): 00:11:21.795 | 1.00th=[ 202], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 245], 00:11:21.795 | 30.00th=[ 265], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:11:21.795 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 363], 00:11:21.795 | 99.00th=[ 424], 99.50th=[ 465], 99.90th=[ 766], 99.95th=[ 1336], 00:11:21.795 | 99.99th=[41157] 00:11:21.795 bw ( KiB/s): min=10472, max=13232, per=34.18%, avg=11764.80, stdev=1148.93, samples=5 00:11:21.795 iops : min= 2618, max= 3308, avg=2941.20, stdev=287.23, samples=5 00:11:21.795 lat (usec) : 250=22.75%, 500=76.95%, 750=0.19%, 1000=0.04% 00:11:21.795 lat (msec) : 2=0.01%, 50=0.04% 00:11:21.795 cpu : usr=2.67%, sys=5.55%, ctx=8916, majf=0, 
minf=1 00:11:21.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.795 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.795 issued rwts: total=8915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.795 00:11:21.795 Run status group 0 (all jobs): 00:11:21.795 READ: bw=33.6MiB/s (35.2MB/s), 459KiB/s-15.4MiB/s (470kB/s-16.1MB/s), io=129MiB (135MB), run=2920-3834msec 00:11:21.795 00:11:21.795 Disk stats (read/write): 00:11:21.795 nvme0n1: ios=398/0, merge=0/0, ticks=3289/0, in_queue=3289, util=95.11% 00:11:21.795 nvme0n2: ios=11085/0, merge=0/0, ticks=3316/0, in_queue=3316, util=96.00% 00:11:21.795 nvme0n3: ios=12236/0, merge=0/0, ticks=3061/0, in_queue=3061, util=99.28% 00:11:21.795 nvme0n4: ios=8776/0, merge=0/0, ticks=2793/0, in_queue=2793, util=99.93% 00:11:22.053 09:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.053 09:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:22.310 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.310 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:22.567 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.567 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc5 00:11:22.825 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.825 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:23.082 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:23.082 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 154298 00:11:23.083 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:23.083 09:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as 
expected' 00:11:23.341 nvmf hotplug test: fio failed as expected 00:11:23.341 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.599 rmmod nvme_tcp 00:11:23.599 rmmod nvme_fabrics 00:11:23.599 rmmod nvme_keyring 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 152322 
']' 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 152322 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 152322 ']' 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 152322 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 152322 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 152322' 00:11:23.599 killing process with pid 152322 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 152322 00:11:23.599 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 152322 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # 
grep -v SPDK_NVMF 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.859 09:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.401 00:11:26.401 real 0m24.155s 00:11:26.401 user 1m24.594s 00:11:26.401 sys 0m7.952s 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.401 ************************************ 00:11:26.401 END TEST nvmf_fio_target 00:11:26.401 ************************************ 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:26.401 ************************************ 00:11:26.401 START TEST nvmf_bdevio 00:11:26.401 ************************************ 00:11:26.401 09:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:26.401 * Looking for test storage... 00:11:26.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:26.401 09:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:11:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.401 --rc genhtml_branch_coverage=1 00:11:26.401 --rc genhtml_function_coverage=1 00:11:26.401 --rc genhtml_legend=1 00:11:26.401 --rc geninfo_all_blocks=1 00:11:26.401 --rc geninfo_unexecuted_blocks=1 00:11:26.401 00:11:26.401 ' 00:11:26.401 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.401 --rc genhtml_branch_coverage=1 00:11:26.401 --rc genhtml_function_coverage=1 00:11:26.401 --rc genhtml_legend=1 00:11:26.401 --rc geninfo_all_blocks=1 00:11:26.401 --rc geninfo_unexecuted_blocks=1 00:11:26.402 00:11:26.402 ' 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:26.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.402 --rc genhtml_branch_coverage=1 00:11:26.402 --rc genhtml_function_coverage=1 00:11:26.402 --rc genhtml_legend=1 00:11:26.402 --rc geninfo_all_blocks=1 00:11:26.402 --rc geninfo_unexecuted_blocks=1 00:11:26.402 00:11:26.402 ' 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:26.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.402 --rc genhtml_branch_coverage=1 00:11:26.402 --rc genhtml_function_coverage=1 00:11:26.402 --rc genhtml_legend=1 00:11:26.402 --rc geninfo_all_blocks=1 00:11:26.402 --rc geninfo_unexecuted_blocks=1 00:11:26.402 00:11:26.402 ' 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.402 09:32:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.402 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.403 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.403 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:26.403 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:26.403 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.403 09:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.307 09:32:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.307 09:32:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:11:28.307 Found 0000:09:00.0 (0x8086 - 0x1592) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:11:28.307 Found 0000:09:00.1 (0x8086 - 0x1592) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.307 
09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:28.307 Found net devices under 0000:09:00.0: cvl_0_0 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:28.307 Found net devices under 0000:09:00.1: cvl_0_1 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.307 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.308 09:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:11:28.308 00:11:28.308 --- 10.0.0.2 ping statistics --- 00:11:28.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.308 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:11:28.308 00:11:28.308 --- 10.0.0.1 ping statistics --- 00:11:28.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.308 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:28.308 09:32:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=156897 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 156897 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 156897 ']' 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.308 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.308 [2024-10-07 09:32:17.134171] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:11:28.308 [2024-10-07 09:32:17.134259] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.308 [2024-10-07 09:32:17.196046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.567 [2024-10-07 09:32:17.305317] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.567 [2024-10-07 09:32:17.305363] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.567 [2024-10-07 09:32:17.305399] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.567 [2024-10-07 09:32:17.305411] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.567 [2024-10-07 09:32:17.305421] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:28.567 [2024-10-07 09:32:17.307085] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:28.567 [2024-10-07 09:32:17.307137] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:11:28.567 [2024-10-07 09:32:17.307161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:11:28.567 [2024-10-07 09:32:17.307164] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.567 [2024-10-07 09:32:17.459435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.567 09:32:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.567 Malloc0 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.567 [2024-10-07 09:32:17.510826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:28.567 { 00:11:28.567 "params": { 00:11:28.567 "name": "Nvme$subsystem", 00:11:28.567 "trtype": "$TEST_TRANSPORT", 00:11:28.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:28.567 "adrfam": "ipv4", 00:11:28.567 "trsvcid": "$NVMF_PORT", 00:11:28.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:28.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:28.567 "hdgst": ${hdgst:-false}, 00:11:28.567 "ddgst": ${ddgst:-false} 00:11:28.567 }, 00:11:28.567 "method": "bdev_nvme_attach_controller" 00:11:28.567 } 00:11:28.567 EOF 00:11:28.567 )") 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:28.567 09:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:28.567 "params": { 00:11:28.567 "name": "Nvme1", 00:11:28.567 "trtype": "tcp", 00:11:28.567 "traddr": "10.0.0.2", 00:11:28.567 "adrfam": "ipv4", 00:11:28.567 "trsvcid": "4420", 00:11:28.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.567 "hdgst": false, 00:11:28.567 "ddgst": false 00:11:28.567 }, 00:11:28.567 "method": "bdev_nvme_attach_controller" 00:11:28.567 }' 00:11:28.567 [2024-10-07 09:32:17.559786] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:11:28.567 [2024-10-07 09:32:17.559861] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157046 ] 00:11:28.826 [2024-10-07 09:32:17.617463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.826 [2024-10-07 09:32:17.730183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.826 [2024-10-07 09:32:17.730234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.826 [2024-10-07 09:32:17.730237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.084 I/O targets: 00:11:29.084 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:29.084 00:11:29.084 00:11:29.084 CUnit - A unit testing framework for C - Version 2.1-3 00:11:29.084 http://cunit.sourceforge.net/ 00:11:29.084 00:11:29.084 00:11:29.084 Suite: bdevio tests on: Nvme1n1 00:11:29.342 Test: blockdev write read block ...passed 00:11:29.342 Test: blockdev write zeroes read block ...passed 00:11:29.342 Test: blockdev write zeroes read no split ...passed 00:11:29.342 Test: blockdev write zeroes read split 
...passed 00:11:29.342 Test: blockdev write zeroes read split partial ...passed 00:11:29.342 Test: blockdev reset ...[2024-10-07 09:32:18.226382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:29.342 [2024-10-07 09:32:18.226494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecf2b0 (9): Bad file descriptor 00:11:29.600 [2024-10-07 09:32:18.363519] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:29.600 passed 00:11:29.600 Test: blockdev write read 8 blocks ...passed 00:11:29.600 Test: blockdev write read size > 128k ...passed 00:11:29.600 Test: blockdev write read invalid size ...passed 00:11:29.600 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:29.600 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:29.600 Test: blockdev write read max offset ...passed 00:11:29.600 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:29.600 Test: blockdev writev readv 8 blocks ...passed 00:11:29.600 Test: blockdev writev readv 30 x 1block ...passed 00:11:29.600 Test: blockdev writev readv block ...passed 00:11:29.600 Test: blockdev writev readv size > 128k ...passed 00:11:29.600 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:29.600 Test: blockdev comparev and writev ...[2024-10-07 09:32:18.534744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.600 [2024-10-07 09:32:18.534780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:29.600 [2024-10-07 09:32:18.534814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.600 [2024-10-07 09:32:18.534832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:29.600 [2024-10-07 09:32:18.535156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.600 [2024-10-07 09:32:18.535180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:29.600 [2024-10-07 09:32:18.535202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.600 [2024-10-07 09:32:18.535218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:29.600 [2024-10-07 09:32:18.535520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.600 [2024-10-07 09:32:18.535544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:29.600 [2024-10-07 09:32:18.535566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.600 [2024-10-07 09:32:18.535582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:29.600 [2024-10-07 09:32:18.535909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.600 [2024-10-07 09:32:18.535938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:29.600 [2024-10-07 09:32:18.535959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:29.600 [2024-10-07 09:32:18.535976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:29.600 passed 00:11:29.858 Test: blockdev nvme passthru rw ...passed 00:11:29.858 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:32:18.619914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.858 [2024-10-07 09:32:18.619943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:29.858 [2024-10-07 09:32:18.620082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.858 [2024-10-07 09:32:18.620105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:29.858 [2024-10-07 09:32:18.620242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.858 [2024-10-07 09:32:18.620264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:29.858 [2024-10-07 09:32:18.620402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.858 [2024-10-07 09:32:18.620425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:29.858 passed 00:11:29.858 Test: blockdev nvme admin passthru ...passed 00:11:29.858 Test: blockdev copy ...passed 00:11:29.858 00:11:29.858 Run Summary: Type Total Ran Passed Failed Inactive 00:11:29.858 suites 1 1 n/a 0 0 00:11:29.858 tests 23 23 23 0 0 00:11:29.858 asserts 152 152 152 0 n/a 00:11:29.858 00:11:29.858 Elapsed time = 1.211 seconds 00:11:30.116 09:32:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.116 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.116 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.116 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.116 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.117 rmmod nvme_tcp 00:11:30.117 rmmod nvme_fabrics 00:11:30.117 rmmod nvme_keyring 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 156897 ']' 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 156897 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 156897 ']' 
00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 156897 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 156897 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 156897' 00:11:30.117 killing process with pid 156897 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 156897 00:11:30.117 09:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 156897 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.377 09:32:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.377 09:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.922 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.922 00:11:32.922 real 0m6.477s 00:11:32.922 user 0m11.212s 00:11:32.922 sys 0m2.027s 00:11:32.922 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.922 09:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.922 ************************************ 00:11:32.922 END TEST nvmf_bdevio 00:11:32.922 ************************************ 00:11:32.922 09:32:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:32.922 00:11:32.922 real 3m57.070s 00:11:32.922 user 10m17.076s 00:11:32.922 sys 1m8.537s 00:11:32.922 09:32:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.922 09:32:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:32.922 ************************************ 00:11:32.922 END TEST nvmf_target_core 00:11:32.922 ************************************ 00:11:32.922 09:32:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:32.922 09:32:21 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.922 09:32:21 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.922 09:32:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.922 
************************************ 00:11:32.922 START TEST nvmf_target_extra 00:11:32.922 ************************************ 00:11:32.922 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:32.922 * Looking for test storage... 00:11:32.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:32.923 
09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:32.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.923 --rc genhtml_branch_coverage=1 00:11:32.923 --rc genhtml_function_coverage=1 00:11:32.923 --rc genhtml_legend=1 00:11:32.923 --rc geninfo_all_blocks=1 00:11:32.923 
--rc geninfo_unexecuted_blocks=1 00:11:32.923 00:11:32.923 ' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:32.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.923 --rc genhtml_branch_coverage=1 00:11:32.923 --rc genhtml_function_coverage=1 00:11:32.923 --rc genhtml_legend=1 00:11:32.923 --rc geninfo_all_blocks=1 00:11:32.923 --rc geninfo_unexecuted_blocks=1 00:11:32.923 00:11:32.923 ' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:32.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.923 --rc genhtml_branch_coverage=1 00:11:32.923 --rc genhtml_function_coverage=1 00:11:32.923 --rc genhtml_legend=1 00:11:32.923 --rc geninfo_all_blocks=1 00:11:32.923 --rc geninfo_unexecuted_blocks=1 00:11:32.923 00:11:32.923 ' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:32.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.923 --rc genhtml_branch_coverage=1 00:11:32.923 --rc genhtml_function_coverage=1 00:11:32.923 --rc genhtml_legend=1 00:11:32.923 --rc geninfo_all_blocks=1 00:11:32.923 --rc geninfo_unexecuted_blocks=1 00:11:32.923 00:11:32.923 ' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.923 ************************************ 00:11:32.923 START TEST nvmf_example 00:11:32.923 ************************************ 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:32.923 * Looking for test storage... 00:11:32.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:32.923 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.924 
09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:32.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.924 --rc genhtml_branch_coverage=1 00:11:32.924 --rc genhtml_function_coverage=1 00:11:32.924 --rc genhtml_legend=1 00:11:32.924 --rc geninfo_all_blocks=1 00:11:32.924 --rc geninfo_unexecuted_blocks=1 00:11:32.924 00:11:32.924 ' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:32.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.924 --rc genhtml_branch_coverage=1 00:11:32.924 --rc genhtml_function_coverage=1 00:11:32.924 --rc genhtml_legend=1 00:11:32.924 --rc geninfo_all_blocks=1 00:11:32.924 --rc geninfo_unexecuted_blocks=1 00:11:32.924 00:11:32.924 ' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:32.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.924 --rc genhtml_branch_coverage=1 00:11:32.924 --rc genhtml_function_coverage=1 00:11:32.924 --rc genhtml_legend=1 00:11:32.924 --rc geninfo_all_blocks=1 00:11:32.924 --rc geninfo_unexecuted_blocks=1 00:11:32.924 00:11:32.924 ' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:32.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.924 --rc 
genhtml_branch_coverage=1 00:11:32.924 --rc genhtml_function_coverage=1 00:11:32.924 --rc genhtml_legend=1 00:11:32.924 --rc geninfo_all_blocks=1 00:11:32.924 --rc geninfo_unexecuted_blocks=1 00:11:32.924 00:11:32.924 ' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:32.924 09:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:32.924 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.925 
09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.925 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.834 09:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:11:34.834 Found 0000:09:00.0 (0x8086 - 0x1592) 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.834 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:11:34.835 Found 0000:09:00.1 (0x8086 - 0x1592) 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:34.835 Found net devices under 0000:09:00.0: cvl_0_0 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.835 09:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:34.835 Found net devices under 0000:09:00.1: cvl_0_1 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.835 
09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.835 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:11:35.095 00:11:35.095 --- 10.0.0.2 ping statistics --- 00:11:35.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.095 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:35.095 00:11:35.095 --- 10.0.0.1 ping statistics --- 00:11:35.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.095 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:35.095 09:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.095 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=159081 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 159081 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 159081 ']' 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:35.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.096 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:35.354 09:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.354 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.355 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.355 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.355 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.355 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.355 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:35.355 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:47.616 Initializing NVMe Controllers 00:11:47.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:47.616 Initialization complete. Launching workers. 00:11:47.616 ======================================================== 00:11:47.616 Latency(us) 00:11:47.616 Device Information : IOPS MiB/s Average min max 00:11:47.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14592.37 57.00 4385.53 687.26 15284.74 00:11:47.616 ======================================================== 00:11:47.616 Total : 14592.37 57.00 4385.53 687.26 15284.74 00:11:47.616 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.616 rmmod nvme_tcp 00:11:47.616 rmmod nvme_fabrics 00:11:47.616 rmmod nvme_keyring 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 159081 ']' 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 159081 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 159081 ']' 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 159081 00:11:47.616 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 159081 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 159081' 00:11:47.617 killing process with pid 159081 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 159081 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 159081 00:11:47.617 nvmf threads initialize successfully 00:11:47.617 bdev subsystem init successfully 00:11:47.617 created a nvmf target service 00:11:47.617 create targets's poll groups done 00:11:47.617 all subsystems of target started 00:11:47.617 nvmf target is running 00:11:47.617 all subsystems of target stopped 00:11:47.617 destroy targets's poll groups done 00:11:47.617 destroyed the nvmf target service 00:11:47.617 bdev subsystem finish 
successfully 00:11:47.617 nvmf threads destroy successfully 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.617 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.188 00:11:48.188 real 0m15.325s 00:11:48.188 user 0m42.226s 00:11:48.188 sys 0m3.287s 00:11:48.188 09:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.188 ************************************ 00:11:48.188 END TEST nvmf_example 00:11:48.188 ************************************ 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.188 ************************************ 00:11:48.188 START TEST nvmf_filesystem 00:11:48.188 ************************************ 00:11:48.188 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.188 * Looking for test storage... 
00:11:48.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:48.188 
09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.188 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:48.189 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:48.189 --rc genhtml_branch_coverage=1 00:11:48.189 --rc genhtml_function_coverage=1 00:11:48.189 --rc genhtml_legend=1 00:11:48.189 --rc geninfo_all_blocks=1 00:11:48.189 --rc geninfo_unexecuted_blocks=1 00:11:48.189 00:11:48.189 ' 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.189 --rc genhtml_branch_coverage=1 00:11:48.189 --rc genhtml_function_coverage=1 00:11:48.189 --rc genhtml_legend=1 00:11:48.189 --rc geninfo_all_blocks=1 00:11:48.189 --rc geninfo_unexecuted_blocks=1 00:11:48.189 00:11:48.189 ' 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.189 --rc genhtml_branch_coverage=1 00:11:48.189 --rc genhtml_function_coverage=1 00:11:48.189 --rc genhtml_legend=1 00:11:48.189 --rc geninfo_all_blocks=1 00:11:48.189 --rc geninfo_unexecuted_blocks=1 00:11:48.189 00:11:48.189 ' 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.189 --rc genhtml_branch_coverage=1 00:11:48.189 --rc genhtml_function_coverage=1 00:11:48.189 --rc genhtml_legend=1 00:11:48.189 --rc geninfo_all_blocks=1 00:11:48.189 --rc geninfo_unexecuted_blocks=1 00:11:48.189 00:11:48.189 ' 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:48.189 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:48.189 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:48.189 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:48.189 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:48.190 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:48.190 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:48.190 #define SPDK_CONFIG_H 00:11:48.190 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:48.190 #define SPDK_CONFIG_APPS 1 00:11:48.190 #define SPDK_CONFIG_ARCH native 00:11:48.190 #undef SPDK_CONFIG_ASAN 00:11:48.190 #undef SPDK_CONFIG_AVAHI 00:11:48.190 #undef SPDK_CONFIG_CET 00:11:48.190 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:48.190 #define SPDK_CONFIG_COVERAGE 1 00:11:48.190 #define SPDK_CONFIG_CROSS_PREFIX 00:11:48.190 #undef SPDK_CONFIG_CRYPTO 00:11:48.190 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:48.190 #undef SPDK_CONFIG_CUSTOMOCF 00:11:48.190 #undef SPDK_CONFIG_DAOS 00:11:48.190 #define SPDK_CONFIG_DAOS_DIR 00:11:48.190 #define SPDK_CONFIG_DEBUG 1 00:11:48.190 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:48.190 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:48.190 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:48.190 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:48.190 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:48.190 #undef SPDK_CONFIG_DPDK_UADK 00:11:48.190 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:48.190 #define SPDK_CONFIG_EXAMPLES 1 00:11:48.190 #undef SPDK_CONFIG_FC 00:11:48.190 #define SPDK_CONFIG_FC_PATH 00:11:48.190 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:48.190 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:48.190 #define SPDK_CONFIG_FSDEV 1 00:11:48.190 #undef SPDK_CONFIG_FUSE 00:11:48.190 #undef SPDK_CONFIG_FUZZER 00:11:48.190 #define SPDK_CONFIG_FUZZER_LIB 00:11:48.190 #undef SPDK_CONFIG_GOLANG 00:11:48.190 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:48.190 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:48.190 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:48.190 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:48.190 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:48.190 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:11:48.190 #undef SPDK_CONFIG_HAVE_LZ4 00:11:48.190 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:48.190 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:48.190 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:48.190 #define SPDK_CONFIG_IDXD 1 00:11:48.190 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:48.190 #undef SPDK_CONFIG_IPSEC_MB 00:11:48.190 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:48.190 #define SPDK_CONFIG_ISAL 1 00:11:48.190 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:48.190 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:48.191 #define SPDK_CONFIG_LIBDIR 00:11:48.191 #undef SPDK_CONFIG_LTO 00:11:48.191 #define SPDK_CONFIG_MAX_LCORES 128 00:11:48.191 #define SPDK_CONFIG_NVME_CUSE 1 00:11:48.191 #undef SPDK_CONFIG_OCF 00:11:48.191 #define SPDK_CONFIG_OCF_PATH 00:11:48.191 #define SPDK_CONFIG_OPENSSL_PATH 00:11:48.191 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:48.191 #define SPDK_CONFIG_PGO_DIR 00:11:48.191 #undef SPDK_CONFIG_PGO_USE 00:11:48.191 #define SPDK_CONFIG_PREFIX /usr/local 00:11:48.191 #undef SPDK_CONFIG_RAID5F 00:11:48.191 #undef SPDK_CONFIG_RBD 00:11:48.191 #define SPDK_CONFIG_RDMA 1 00:11:48.191 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:48.191 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:48.191 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:48.191 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:48.191 #define SPDK_CONFIG_SHARED 1 00:11:48.191 #undef SPDK_CONFIG_SMA 00:11:48.191 #define SPDK_CONFIG_TESTS 1 00:11:48.191 #undef SPDK_CONFIG_TSAN 00:11:48.191 #define SPDK_CONFIG_UBLK 1 00:11:48.191 #define SPDK_CONFIG_UBSAN 1 00:11:48.191 #undef SPDK_CONFIG_UNIT_TESTS 00:11:48.191 #undef SPDK_CONFIG_URING 00:11:48.191 #define SPDK_CONFIG_URING_PATH 00:11:48.191 #undef SPDK_CONFIG_URING_ZNS 00:11:48.191 #undef SPDK_CONFIG_USDT 00:11:48.191 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:48.191 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:48.191 #define SPDK_CONFIG_VFIO_USER 1 00:11:48.191 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:48.191 
#define SPDK_CONFIG_VHOST 1 00:11:48.191 #define SPDK_CONFIG_VIRTIO 1 00:11:48.191 #undef SPDK_CONFIG_VTUNE 00:11:48.191 #define SPDK_CONFIG_VTUNE_DIR 00:11:48.191 #define SPDK_CONFIG_WERROR 1 00:11:48.191 #define SPDK_CONFIG_WPDK_DIR 00:11:48.191 #undef SPDK_CONFIG_XNVME 00:11:48.191 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:48.191 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:48.192 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:48.192 
09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:48.192 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:48.192 
09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:48.192 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:48.192 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.193 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.194 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.194 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.194 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.194 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.194 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.194 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.194 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 160699 ]] 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 160699 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.YHy0lQ 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.YHy0lQ/tests/target /tmp/spdk.YHy0lQ 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=56194994176 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988528128 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5793533952 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:48.456 
09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30984232960 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:48.456 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375318528 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22388736 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30994083840 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:48.457 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=180224 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:48.457 * Looking for test storage... 
00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=56194994176 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8008126464 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.457 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:48.457 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.457 --rc genhtml_branch_coverage=1 00:11:48.457 --rc genhtml_function_coverage=1 00:11:48.457 --rc genhtml_legend=1 00:11:48.457 --rc geninfo_all_blocks=1 00:11:48.457 --rc geninfo_unexecuted_blocks=1 00:11:48.457 00:11:48.457 ' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.457 --rc genhtml_branch_coverage=1 00:11:48.457 --rc genhtml_function_coverage=1 00:11:48.457 --rc genhtml_legend=1 00:11:48.457 --rc geninfo_all_blocks=1 00:11:48.457 --rc geninfo_unexecuted_blocks=1 00:11:48.457 00:11:48.457 ' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.457 --rc genhtml_branch_coverage=1 00:11:48.457 --rc genhtml_function_coverage=1 00:11:48.457 --rc genhtml_legend=1 00:11:48.457 --rc geninfo_all_blocks=1 00:11:48.457 --rc geninfo_unexecuted_blocks=1 00:11:48.457 00:11:48.457 ' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.457 --rc genhtml_branch_coverage=1 00:11:48.457 --rc genhtml_function_coverage=1 00:11:48.457 --rc genhtml_legend=1 00:11:48.457 --rc geninfo_all_blocks=1 00:11:48.457 --rc geninfo_unexecuted_blocks=1 00:11:48.457 00:11:48.457 ' 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.457 09:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:48.457 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.458 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.995 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.996 09:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:11:50.996 Found 0000:09:00.0 (0x8086 - 0x1592) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:11:50.996 Found 0000:09:00.1 (0x8086 - 0x1592) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.996 09:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:50.996 Found net devices under 0000:09:00.0: cvl_0_0 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:50.996 Found net devices under 0000:09:00.1: cvl_0_1 00:11:50.996 09:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.996 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:50.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:11:50.996 00:11:50.996 --- 10.0.0.2 ping statistics --- 00:11:50.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.997 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:11:50.997 00:11:50.997 --- 10.0.0.1 ping statistics --- 00:11:50.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.997 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:50.997 09:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.997 ************************************ 00:11:50.997 START TEST nvmf_filesystem_no_in_capsule 00:11:50.997 ************************************ 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=162256 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 162256 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 162256 ']' 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.997 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.997 [2024-10-07 09:32:39.753921] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:11:50.997 [2024-10-07 09:32:39.754025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.997 [2024-10-07 09:32:39.814388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.997 [2024-10-07 09:32:39.924312] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.997 [2024-10-07 09:32:39.924364] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.997 [2024-10-07 09:32:39.924393] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.997 [2024-10-07 09:32:39.924403] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.997 [2024-10-07 09:32:39.924414] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.997 [2024-10-07 09:32:39.925852] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.997 [2024-10-07 09:32:39.925880] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.997 [2024-10-07 09:32:39.925934] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.997 [2024-10-07 09:32:39.925937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.256 [2024-10-07 09:32:40.087567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.256 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.514 Malloc1 00:11:51.514 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.514 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.514 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.514 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.515 [2024-10-07 09:32:40.277121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:51.515 09:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:51.515 { 00:11:51.515 "name": "Malloc1", 00:11:51.515 "aliases": [ 00:11:51.515 "93be71c9-1727-42ee-82b0-5da7f7c44795" 00:11:51.515 ], 00:11:51.515 "product_name": "Malloc disk", 00:11:51.515 "block_size": 512, 00:11:51.515 "num_blocks": 1048576, 00:11:51.515 "uuid": "93be71c9-1727-42ee-82b0-5da7f7c44795", 00:11:51.515 "assigned_rate_limits": { 00:11:51.515 "rw_ios_per_sec": 0, 00:11:51.515 "rw_mbytes_per_sec": 0, 00:11:51.515 "r_mbytes_per_sec": 0, 00:11:51.515 "w_mbytes_per_sec": 0 00:11:51.515 }, 00:11:51.515 "claimed": true, 00:11:51.515 "claim_type": "exclusive_write", 00:11:51.515 "zoned": false, 00:11:51.515 "supported_io_types": { 00:11:51.515 "read": true, 00:11:51.515 "write": true, 00:11:51.515 "unmap": true, 00:11:51.515 "flush": true, 00:11:51.515 "reset": true, 00:11:51.515 "nvme_admin": false, 00:11:51.515 "nvme_io": false, 00:11:51.515 "nvme_io_md": false, 00:11:51.515 "write_zeroes": true, 00:11:51.515 "zcopy": true, 00:11:51.515 "get_zone_info": false, 00:11:51.515 "zone_management": false, 00:11:51.515 "zone_append": false, 00:11:51.515 "compare": false, 00:11:51.515 "compare_and_write": 
false, 00:11:51.515 "abort": true, 00:11:51.515 "seek_hole": false, 00:11:51.515 "seek_data": false, 00:11:51.515 "copy": true, 00:11:51.515 "nvme_iov_md": false 00:11:51.515 }, 00:11:51.515 "memory_domains": [ 00:11:51.515 { 00:11:51.515 "dma_device_id": "system", 00:11:51.515 "dma_device_type": 1 00:11:51.515 }, 00:11:51.515 { 00:11:51.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.515 "dma_device_type": 2 00:11:51.515 } 00:11:51.515 ], 00:11:51.515 "driver_specific": {} 00:11:51.515 } 00:11:51.515 ]' 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:51.515 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.081 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:52.081 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.081 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.081 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:52.081 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:53.981 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:53.981 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:53.981 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.239 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:54.239 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.239 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:54.239 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:54.239 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:54.239 09:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:54.239 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:55.176 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:56.116 09:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.116 ************************************ 00:11:56.116 START TEST filesystem_ext4 00:11:56.116 ************************************ 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:56.116 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:56.116 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:56.116 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:56.116 09:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:56.117 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:56.117 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:56.117 mke2fs 1.47.0 (5-Feb-2023) 00:11:56.378 Discarding device blocks: 0/522240 done 00:11:56.378 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:56.378 Filesystem UUID: f15fdf5c-c547-467c-8323-2ca9daa7fd28 00:11:56.378 Superblock backups stored on blocks: 00:11:56.378 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:56.378 00:11:56.378 Allocating group tables: 0/64 done 00:11:56.378 Writing inode tables: 0/64 done 00:11:56.378 Creating journal (8192 blocks): done 00:11:56.378 Writing superblocks and filesystem accounting information: 0/64 done 00:11:56.378 00:11:56.378 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:56.378 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.658 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.658 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:01.658 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.658 09:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 162256 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.659 00:12:01.659 real 0m5.504s 00:12:01.659 user 0m0.025s 00:12:01.659 sys 0m0.095s 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:01.659 ************************************ 00:12:01.659 END TEST filesystem_ext4 00:12:01.659 ************************************ 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:01.659 
09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.659 ************************************ 00:12:01.659 START TEST filesystem_btrfs 00:12:01.659 ************************************ 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:01.659 09:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:01.659 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:01.919 btrfs-progs v6.8.1 00:12:01.919 See https://btrfs.readthedocs.io for more information. 00:12:01.919 00:12:01.919 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:01.919 NOTE: several default settings have changed in version 5.15, please make sure 00:12:01.919 this does not affect your deployments: 00:12:01.919 - DUP for metadata (-m dup) 00:12:01.919 - enabled no-holes (-O no-holes) 00:12:01.919 - enabled free-space-tree (-R free-space-tree) 00:12:01.919 00:12:01.919 Label: (null) 00:12:01.919 UUID: ec9f9897-71cf-4d42-b0ec-43742d90efbe 00:12:01.919 Node size: 16384 00:12:01.919 Sector size: 4096 (CPU page size: 4096) 00:12:01.919 Filesystem size: 510.00MiB 00:12:01.919 Block group profiles: 00:12:01.919 Data: single 8.00MiB 00:12:01.919 Metadata: DUP 32.00MiB 00:12:01.919 System: DUP 8.00MiB 00:12:01.919 SSD detected: yes 00:12:01.919 Zoned device: no 00:12:01.919 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:01.919 Checksum: crc32c 00:12:01.919 Number of devices: 1 00:12:01.919 Devices: 00:12:01.919 ID SIZE PATH 00:12:01.919 1 510.00MiB /dev/nvme0n1p1 00:12:01.919 00:12:01.919 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:01.919 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.485 09:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.485 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:02.485 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.485 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:02.485 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:02.485 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 162256 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.746 00:12:02.746 real 0m0.951s 00:12:02.746 user 0m0.013s 00:12:02.746 sys 0m0.144s 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.746 
09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:02.746 ************************************ 00:12:02.746 END TEST filesystem_btrfs 00:12:02.746 ************************************ 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.746 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.746 ************************************ 00:12:02.746 START TEST filesystem_xfs 00:12:02.746 ************************************ 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:02.747 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:02.747 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:02.747 = sectsz=512 attr=2, projid32bit=1 00:12:02.747 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:02.747 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:02.747 data = bsize=4096 blocks=130560, imaxpct=25 00:12:02.747 = sunit=0 swidth=0 blks 00:12:02.747 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:02.747 log =internal log bsize=4096 blocks=16384, version=2 00:12:02.747 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:02.747 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:04.127 Discarding blocks...Done. 
00:12:04.127 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:04.127 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.032 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.032 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.032 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.032 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.032 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.032 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.032 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 162256 00:12:06.032 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.032 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.032 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.291 09:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.291 00:12:06.291 real 0m3.481s 00:12:06.291 user 0m0.015s 00:12:06.291 sys 0m0.096s 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.291 ************************************ 00:12:06.291 END TEST filesystem_xfs 00:12:06.291 ************************************ 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:06.291 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 162256 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 162256 ']' 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 162256 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 162256 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 162256' 00:12:06.551 killing process with pid 162256 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 162256 00:12:06.551 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 162256 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.120 00:12:07.120 real 0m16.113s 00:12:07.120 user 1m2.109s 00:12:07.120 sys 0m2.138s 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.120 ************************************ 00:12:07.120 END TEST nvmf_filesystem_no_in_capsule 00:12:07.120 ************************************ 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.120 09:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.120 ************************************ 00:12:07.120 START TEST nvmf_filesystem_in_capsule 00:12:07.120 ************************************ 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=164378 00:12:07.120 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.121 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 164378 00:12:07.121 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 164378 ']' 00:12:07.121 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.121 09:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.121 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.121 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.121 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.121 [2024-10-07 09:32:55.920567] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:12:07.121 [2024-10-07 09:32:55.920644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.121 [2024-10-07 09:32:55.982492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.121 [2024-10-07 09:32:56.081062] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.121 [2024-10-07 09:32:56.081124] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.121 [2024-10-07 09:32:56.081151] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.121 [2024-10-07 09:32:56.081161] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.121 [2024-10-07 09:32:56.081170] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:07.121 [2024-10-07 09:32:56.082567] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.121 [2024-10-07 09:32:56.082650] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.121 [2024-10-07 09:32:56.082761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.121 [2024-10-07 09:32:56.082822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.379 [2024-10-07 09:32:56.231346] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.379 Malloc1 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.379 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.639 09:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.639 [2024-10-07 09:32:56.392818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.639 09:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:07.639 { 00:12:07.639 "name": "Malloc1", 00:12:07.639 "aliases": [ 00:12:07.639 "e9a1554f-9c22-487b-961a-9325b7e21ec9" 00:12:07.639 ], 00:12:07.639 "product_name": "Malloc disk", 00:12:07.639 "block_size": 512, 00:12:07.639 "num_blocks": 1048576, 00:12:07.639 "uuid": "e9a1554f-9c22-487b-961a-9325b7e21ec9", 00:12:07.639 "assigned_rate_limits": { 00:12:07.639 "rw_ios_per_sec": 0, 00:12:07.639 "rw_mbytes_per_sec": 0, 00:12:07.639 "r_mbytes_per_sec": 0, 00:12:07.639 "w_mbytes_per_sec": 0 00:12:07.639 }, 00:12:07.639 "claimed": true, 00:12:07.639 "claim_type": "exclusive_write", 00:12:07.639 "zoned": false, 00:12:07.639 "supported_io_types": { 00:12:07.639 "read": true, 00:12:07.639 "write": true, 00:12:07.639 "unmap": true, 00:12:07.639 "flush": true, 00:12:07.639 "reset": true, 00:12:07.639 "nvme_admin": false, 00:12:07.639 "nvme_io": false, 00:12:07.639 "nvme_io_md": false, 00:12:07.639 "write_zeroes": true, 00:12:07.639 "zcopy": true, 00:12:07.639 "get_zone_info": false, 00:12:07.639 "zone_management": false, 00:12:07.639 "zone_append": false, 00:12:07.639 "compare": false, 00:12:07.639 "compare_and_write": false, 00:12:07.639 "abort": true, 00:12:07.639 "seek_hole": false, 00:12:07.639 "seek_data": false, 00:12:07.639 "copy": true, 00:12:07.639 "nvme_iov_md": false 00:12:07.639 }, 00:12:07.639 "memory_domains": [ 00:12:07.639 { 00:12:07.639 "dma_device_id": "system", 00:12:07.639 "dma_device_type": 1 00:12:07.639 }, 00:12:07.639 { 00:12:07.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.639 "dma_device_type": 2 00:12:07.639 } 00:12:07.639 ], 00:12:07.639 
"driver_specific": {} 00:12:07.639 } 00:12:07.639 ]' 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:07.639 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.210 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.210 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.210 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.210 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:08.210 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:10.127 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:10.127 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:10.127 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.127 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:10.127 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.127 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:10.127 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:10.127 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:10.387 09:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:10.387 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:10.648 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:11.588 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:11.588 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:11.588 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:11.588 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.588 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.847 ************************************ 00:12:11.847 START TEST filesystem_in_capsule_ext4 00:12:11.847 ************************************ 00:12:11.847 09:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:11.847 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:11.847 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.847 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:11.847 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:11.847 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:11.847 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:11.848 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:11.848 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:11.848 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:11.848 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:11.848 mke2fs 1.47.0 (5-Feb-2023) 00:12:11.848 Discarding device blocks: 
0/522240 done 00:12:11.848 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:11.848 Filesystem UUID: d9dfe341-9b19-4217-8ac9-408c17d5cc53 00:12:11.848 Superblock backups stored on blocks: 00:12:11.848 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:11.848 00:12:11.848 Allocating group tables: 0/64 done 00:12:11.848 Writing inode tables: 0/64 done 00:12:12.108 Creating journal (8192 blocks): done 00:12:14.058 Writing superblocks and filesystem accounting information: 0/64 done 00:12:14.058 00:12:14.058 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:14.058 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:19.338 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:19.338 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:19.338 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:19.338 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:19.338 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:19.338 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 164378 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:19.338 00:12:19.338 real 0m7.419s 00:12:19.338 user 0m0.024s 00:12:19.338 sys 0m0.054s 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:19.338 ************************************ 00:12:19.338 END TEST filesystem_in_capsule_ext4 00:12:19.338 ************************************ 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.338 ************************************ 00:12:19.338 START 
TEST filesystem_in_capsule_btrfs 00:12:19.338 ************************************ 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:19.338 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:19.338 btrfs-progs v6.8.1 00:12:19.338 See https://btrfs.readthedocs.io for more information. 00:12:19.338 00:12:19.338 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:19.338 NOTE: several default settings have changed in version 5.15, please make sure 00:12:19.339 this does not affect your deployments: 00:12:19.339 - DUP for metadata (-m dup) 00:12:19.339 - enabled no-holes (-O no-holes) 00:12:19.339 - enabled free-space-tree (-R free-space-tree) 00:12:19.339 00:12:19.339 Label: (null) 00:12:19.339 UUID: 130d3370-e217-497b-8291-eb6a2a3f3b29 00:12:19.339 Node size: 16384 00:12:19.339 Sector size: 4096 (CPU page size: 4096) 00:12:19.339 Filesystem size: 510.00MiB 00:12:19.339 Block group profiles: 00:12:19.339 Data: single 8.00MiB 00:12:19.339 Metadata: DUP 32.00MiB 00:12:19.339 System: DUP 8.00MiB 00:12:19.339 SSD detected: yes 00:12:19.339 Zoned device: no 00:12:19.339 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:19.339 Checksum: crc32c 00:12:19.339 Number of devices: 1 00:12:19.339 Devices: 00:12:19.339 ID SIZE PATH 00:12:19.339 1 510.00MiB /dev/nvme0n1p1 00:12:19.339 00:12:19.339 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:19.339 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 164378 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:19.906 00:12:19.906 real 0m0.638s 00:12:19.906 user 0m0.017s 00:12:19.906 sys 0m0.103s 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:19.906 ************************************ 00:12:19.906 END TEST filesystem_in_capsule_btrfs 00:12:19.906 ************************************ 00:12:19.906 09:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.906 ************************************ 00:12:19.906 START TEST filesystem_in_capsule_xfs 00:12:19.906 ************************************ 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:19.906 
09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force
00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f
00:12:19.906 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:20.164 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:12:20.164 = sectsz=512 attr=2, projid32bit=1
00:12:20.164 = crc=1 finobt=1, sparse=1, rmapbt=0
00:12:20.164 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:12:20.164 data = bsize=4096 blocks=130560, imaxpct=25
00:12:20.164 = sunit=0 swidth=0 blks
00:12:20.164 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:12:20.164 log =internal log bsize=4096 blocks=16384, version=2
00:12:20.164 = sectsz=512 sunit=0 blks, lazy-count=1
00:12:20.164 realtime =none extsz=4096 blocks=0, rtextents=0
00:12:21.104 Discarding blocks...Done.
00:12:21.104 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0
00:12:21.104 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 164378
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:23.011
00:12:23.011 real 0m3.104s
00:12:23.011 user 0m0.013s
00:12:23.011 sys 0m0.061s
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:12:23.011 ************************************
00:12:23.011 END TEST filesystem_in_capsule_xfs
00:12:23.011 ************************************
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:23.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 164378
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 164378 ']'
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 164378
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:23.011 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 164378
00:12:23.272 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:23.272 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:23.272 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 164378'
00:12:23.272 killing process with pid 164378
00:12:23.272 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 164378
00:12:23.272 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 164378
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:12:23.533
00:12:23.533 real 0m16.626s
00:12:23.533 user 1m4.101s
00:12:23.533 sys 0m2.028s
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:23.533 ************************************
00:12:23.533 END TEST nvmf_filesystem_in_capsule
00:12:23.533 ************************************
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:23.533 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:23.533 rmmod nvme_tcp
00:12:23.793 rmmod nvme_fabrics
00:12:23.793 rmmod nvme_keyring
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']'
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore
00:12:23.793 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:23.794 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:23.794 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:23.794 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:23.794 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:25.705 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:25.705
00:12:25.705 real 0m37.668s
00:12:25.705 user 2m7.360s
00:12:25.705 sys 0m5.950s
00:12:25.705 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:25.705 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:12:25.705 ************************************
00:12:25.705 END TEST nvmf_filesystem
00:12:25.705 ************************************
00:12:25.705 09:33:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:12:25.705 09:33:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:25.705 09:33:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:25.705 09:33:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:25.705 ************************************
00:12:25.705 START TEST nvmf_target_discovery
00:12:25.705 ************************************
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:12:25.966 * Looking for test storage...
00:12:25.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:12:25.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.966 --rc genhtml_branch_coverage=1
00:12:25.966 --rc genhtml_function_coverage=1
00:12:25.966 --rc genhtml_legend=1
00:12:25.966 --rc geninfo_all_blocks=1
00:12:25.966 --rc geninfo_unexecuted_blocks=1
00:12:25.966
00:12:25.966 '
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:12:25.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.966 --rc genhtml_branch_coverage=1
00:12:25.966 --rc genhtml_function_coverage=1
00:12:25.966 --rc genhtml_legend=1
00:12:25.966 --rc geninfo_all_blocks=1
00:12:25.966 --rc geninfo_unexecuted_blocks=1
00:12:25.966
00:12:25.966 '
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:12:25.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.966 --rc genhtml_branch_coverage=1
00:12:25.966 --rc genhtml_function_coverage=1
00:12:25.966 --rc genhtml_legend=1
00:12:25.966 --rc geninfo_all_blocks=1
00:12:25.966 --rc geninfo_unexecuted_blocks=1
00:12:25.966
00:12:25.966 '
00:12:25.966 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:12:25.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.966 --rc genhtml_branch_coverage=1
00:12:25.966 --rc genhtml_function_coverage=1
00:12:25.966 --rc genhtml_legend=1
00:12:25.966 --rc geninfo_all_blocks=1
00:12:25.967 --rc geninfo_unexecuted_blocks=1
00:12:25.967
00:12:25.967 '
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:25.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:12:25.967 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=()
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=()
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=()
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)'
00:12:27.879 Found 0000:09:00.0 (0x8086 - 0x1592)
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)'
00:12:27.879 Found 0000:09:00.1 (0x8086 - 0x1592)
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:27.879 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]]
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:12:27.880 Found net devices under 0000:09:00.0: cvl_0_0
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]]
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:12:27.880 Found net devices under 0000:09:00.1: cvl_0_1
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:27.880 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2
> 1 )) 00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.139 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:28.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:12:28.140 00:12:28.140 --- 10.0.0.2 ping statistics --- 00:12:28.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.140 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:28.140 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:12:28.140 00:12:28.140 --- 10.0.0.1 ping statistics --- 00:12:28.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.140 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=168847 00:12:28.140 09:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 168847 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 168847 ']' 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.140 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.140 [2024-10-07 09:33:17.085301] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:12:28.140 [2024-10-07 09:33:17.085377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.401 [2024-10-07 09:33:17.149195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.401 [2024-10-07 09:33:17.258998] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:28.401 [2024-10-07 09:33:17.259055] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.401 [2024-10-07 09:33:17.259083] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.401 [2024-10-07 09:33:17.259095] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.401 [2024-10-07 09:33:17.259105] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.401 [2024-10-07 09:33:17.260695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.401 [2024-10-07 09:33:17.260746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.401 [2024-10-07 09:33:17.260786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.401 [2024-10-07 09:33:17.260789] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.401 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.401 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:28.401 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:28.401 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.401 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.661 [2024-10-07 09:33:17.425220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.661 Null1 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.661 
09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.661 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 [2024-10-07 09:33:17.473551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 Null2 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 
09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 Null3 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 Null4 00:12:28.662 
09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.662 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 4420 00:12:28.923 00:12:28.923 Discovery Log Number of Records 6, Generation counter 6 00:12:28.923 =====Discovery Log Entry 0====== 00:12:28.923 trtype: tcp 00:12:28.923 adrfam: ipv4 00:12:28.923 subtype: current discovery subsystem 00:12:28.923 treq: not required 00:12:28.923 portid: 0 00:12:28.923 trsvcid: 4420 00:12:28.923 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:28.923 traddr: 10.0.0.2 00:12:28.923 eflags: explicit discovery connections, duplicate discovery information 00:12:28.923 sectype: none 00:12:28.923 =====Discovery Log Entry 1====== 00:12:28.923 trtype: tcp 00:12:28.923 adrfam: ipv4 00:12:28.923 subtype: nvme subsystem 00:12:28.923 treq: not required 00:12:28.923 portid: 0 00:12:28.923 trsvcid: 4420 00:12:28.923 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:28.923 traddr: 10.0.0.2 00:12:28.923 eflags: none 00:12:28.923 sectype: none 00:12:28.923 =====Discovery Log Entry 2====== 00:12:28.923 
trtype: tcp 00:12:28.923 adrfam: ipv4 00:12:28.923 subtype: nvme subsystem 00:12:28.923 treq: not required 00:12:28.923 portid: 0 00:12:28.923 trsvcid: 4420 00:12:28.923 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:28.923 traddr: 10.0.0.2 00:12:28.923 eflags: none 00:12:28.923 sectype: none 00:12:28.923 =====Discovery Log Entry 3====== 00:12:28.923 trtype: tcp 00:12:28.923 adrfam: ipv4 00:12:28.923 subtype: nvme subsystem 00:12:28.923 treq: not required 00:12:28.923 portid: 0 00:12:28.923 trsvcid: 4420 00:12:28.923 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:28.923 traddr: 10.0.0.2 00:12:28.923 eflags: none 00:12:28.923 sectype: none 00:12:28.923 =====Discovery Log Entry 4====== 00:12:28.923 trtype: tcp 00:12:28.923 adrfam: ipv4 00:12:28.923 subtype: nvme subsystem 00:12:28.923 treq: not required 00:12:28.923 portid: 0 00:12:28.923 trsvcid: 4420 00:12:28.923 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:28.923 traddr: 10.0.0.2 00:12:28.923 eflags: none 00:12:28.923 sectype: none 00:12:28.923 =====Discovery Log Entry 5====== 00:12:28.923 trtype: tcp 00:12:28.923 adrfam: ipv4 00:12:28.923 subtype: discovery subsystem referral 00:12:28.923 treq: not required 00:12:28.923 portid: 0 00:12:28.923 trsvcid: 4430 00:12:28.923 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:28.923 traddr: 10.0.0.2 00:12:28.923 eflags: none 00:12:28.923 sectype: none 00:12:28.923 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:28.923 Perform nvmf subsystem discovery via RPC 00:12:28.923 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:28.923 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.923 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.923 [ 00:12:28.923 { 00:12:28.923 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:28.923 "subtype": "Discovery", 00:12:28.923 "listen_addresses": [ 00:12:28.923 { 00:12:28.923 "trtype": "TCP", 00:12:28.923 "adrfam": "IPv4", 00:12:28.923 "traddr": "10.0.0.2", 00:12:28.923 "trsvcid": "4420" 00:12:28.923 } 00:12:28.923 ], 00:12:28.924 "allow_any_host": true, 00:12:28.924 "hosts": [] 00:12:28.924 }, 00:12:28.924 { 00:12:28.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.924 "subtype": "NVMe", 00:12:28.924 "listen_addresses": [ 00:12:28.924 { 00:12:28.924 "trtype": "TCP", 00:12:28.924 "adrfam": "IPv4", 00:12:28.924 "traddr": "10.0.0.2", 00:12:28.924 "trsvcid": "4420" 00:12:28.924 } 00:12:28.924 ], 00:12:28.924 "allow_any_host": true, 00:12:28.924 "hosts": [], 00:12:28.924 "serial_number": "SPDK00000000000001", 00:12:28.924 "model_number": "SPDK bdev Controller", 00:12:28.924 "max_namespaces": 32, 00:12:28.924 "min_cntlid": 1, 00:12:28.924 "max_cntlid": 65519, 00:12:28.924 "namespaces": [ 00:12:28.924 { 00:12:28.924 "nsid": 1, 00:12:28.924 "bdev_name": "Null1", 00:12:28.924 "name": "Null1", 00:12:28.924 "nguid": "BDA418F8B6F14B8E9DCFA50290B00B60", 00:12:28.924 "uuid": "bda418f8-b6f1-4b8e-9dcf-a50290b00b60" 00:12:28.924 } 00:12:28.924 ] 00:12:28.924 }, 00:12:28.924 { 00:12:28.924 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:28.924 "subtype": "NVMe", 00:12:28.924 "listen_addresses": [ 00:12:28.924 { 00:12:28.924 "trtype": "TCP", 00:12:28.924 "adrfam": "IPv4", 00:12:28.924 "traddr": "10.0.0.2", 00:12:28.924 "trsvcid": "4420" 00:12:28.924 } 00:12:28.924 ], 00:12:28.924 "allow_any_host": true, 00:12:28.924 "hosts": [], 00:12:28.924 "serial_number": "SPDK00000000000002", 00:12:28.924 "model_number": "SPDK bdev Controller", 00:12:28.924 "max_namespaces": 32, 00:12:28.924 "min_cntlid": 1, 00:12:28.924 "max_cntlid": 65519, 00:12:28.924 "namespaces": [ 00:12:28.924 { 00:12:28.924 "nsid": 1, 00:12:28.924 "bdev_name": "Null2", 00:12:28.924 "name": "Null2", 00:12:28.924 "nguid": "E487A9DFC304407E9DD1309511344E71", 
00:12:28.924 "uuid": "e487a9df-c304-407e-9dd1-309511344e71" 00:12:28.924 } 00:12:28.924 ] 00:12:28.924 }, 00:12:28.924 { 00:12:28.924 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:28.924 "subtype": "NVMe", 00:12:28.924 "listen_addresses": [ 00:12:28.924 { 00:12:28.924 "trtype": "TCP", 00:12:28.924 "adrfam": "IPv4", 00:12:28.924 "traddr": "10.0.0.2", 00:12:28.924 "trsvcid": "4420" 00:12:28.924 } 00:12:28.924 ], 00:12:28.924 "allow_any_host": true, 00:12:28.924 "hosts": [], 00:12:28.924 "serial_number": "SPDK00000000000003", 00:12:28.924 "model_number": "SPDK bdev Controller", 00:12:28.924 "max_namespaces": 32, 00:12:28.924 "min_cntlid": 1, 00:12:28.924 "max_cntlid": 65519, 00:12:28.924 "namespaces": [ 00:12:28.924 { 00:12:28.924 "nsid": 1, 00:12:28.924 "bdev_name": "Null3", 00:12:28.924 "name": "Null3", 00:12:28.924 "nguid": "8FD6480F1CB44539A192FAB2E4BE23CE", 00:12:28.924 "uuid": "8fd6480f-1cb4-4539-a192-fab2e4be23ce" 00:12:28.924 } 00:12:28.924 ] 00:12:28.924 }, 00:12:28.924 { 00:12:28.924 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:28.924 "subtype": "NVMe", 00:12:28.924 "listen_addresses": [ 00:12:28.924 { 00:12:28.924 "trtype": "TCP", 00:12:28.924 "adrfam": "IPv4", 00:12:28.924 "traddr": "10.0.0.2", 00:12:28.924 "trsvcid": "4420" 00:12:28.924 } 00:12:28.924 ], 00:12:28.924 "allow_any_host": true, 00:12:28.924 "hosts": [], 00:12:28.924 "serial_number": "SPDK00000000000004", 00:12:28.924 "model_number": "SPDK bdev Controller", 00:12:28.924 "max_namespaces": 32, 00:12:28.924 "min_cntlid": 1, 00:12:28.924 "max_cntlid": 65519, 00:12:28.924 "namespaces": [ 00:12:28.924 { 00:12:28.924 "nsid": 1, 00:12:28.924 "bdev_name": "Null4", 00:12:28.924 "name": "Null4", 00:12:28.924 "nguid": "DFB7DD2DD27F49D5B3A41B779BFAED11", 00:12:28.924 "uuid": "dfb7dd2d-d27f-49d5-b3a4-1b779bfaed11" 00:12:28.924 } 00:12:28.924 ] 00:12:28.924 } 00:12:28.924 ] 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.924 
09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:28.924 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.925 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.925 rmmod nvme_tcp 00:12:29.188 rmmod nvme_fabrics 00:12:29.188 rmmod nvme_keyring 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 168847 ']' 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 168847 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 168847 ']' 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 168847 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:29.188 
09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 168847 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 168847' 00:12:29.188 killing process with pid 168847 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 168847 00:12:29.188 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 168847 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.449 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.358 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:31.358 00:12:31.358 real 0m5.621s 00:12:31.358 user 0m4.762s 00:12:31.358 sys 0m1.904s 00:12:31.358 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.358 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.358 ************************************ 00:12:31.358 END TEST nvmf_target_discovery 00:12:31.358 ************************************ 00:12:31.358 09:33:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:31.358 09:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.358 09:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.358 09:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.618 ************************************ 00:12:31.618 START TEST nvmf_referrals 00:12:31.618 ************************************ 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:31.618 * Looking for test storage... 
00:12:31.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:31.618 09:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:31.618 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:31.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.619 
--rc genhtml_branch_coverage=1 00:12:31.619 --rc genhtml_function_coverage=1 00:12:31.619 --rc genhtml_legend=1 00:12:31.619 --rc geninfo_all_blocks=1 00:12:31.619 --rc geninfo_unexecuted_blocks=1 00:12:31.619 00:12:31.619 ' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:31.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.619 --rc genhtml_branch_coverage=1 00:12:31.619 --rc genhtml_function_coverage=1 00:12:31.619 --rc genhtml_legend=1 00:12:31.619 --rc geninfo_all_blocks=1 00:12:31.619 --rc geninfo_unexecuted_blocks=1 00:12:31.619 00:12:31.619 ' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:31.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.619 --rc genhtml_branch_coverage=1 00:12:31.619 --rc genhtml_function_coverage=1 00:12:31.619 --rc genhtml_legend=1 00:12:31.619 --rc geninfo_all_blocks=1 00:12:31.619 --rc geninfo_unexecuted_blocks=1 00:12:31.619 00:12:31.619 ' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:31.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.619 --rc genhtml_branch_coverage=1 00:12:31.619 --rc genhtml_function_coverage=1 00:12:31.619 --rc genhtml_legend=1 00:12:31.619 --rc geninfo_all_blocks=1 00:12:31.619 --rc geninfo_unexecuted_blocks=1 00:12:31.619 00:12:31.619 ' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.619 
09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.619 09:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:31.619 09:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:31.619 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:31.620 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:31.620 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:12:34.159 Found 0000:09:00.0 (0x8086 - 0x1592) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:12:34.159 Found 
0000:09:00.1 (0x8086 - 0x1592) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:34.159 Found net devices under 0000:09:00.0: cvl_0_0 00:12:34.159 09:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:34.159 Found net devices under 0000:09:00.1: cvl_0_1 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.159 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:34.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:12:34.160 00:12:34.160 --- 10.0.0.2 ping statistics --- 00:12:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.160 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:34.160 00:12:34.160 --- 10.0.0.1 ping statistics --- 00:12:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.160 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=170939 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 170939 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 170939 ']' 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.160 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.160 [2024-10-07 09:33:22.862603] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:12:34.160 [2024-10-07 09:33:22.862707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.160 [2024-10-07 09:33:22.934344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.160 [2024-10-07 09:33:23.044228] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.160 [2024-10-07 09:33:23.044299] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:34.160 [2024-10-07 09:33:23.044327] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.160 [2024-10-07 09:33:23.044338] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.160 [2024-10-07 09:33:23.044348] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.160 [2024-10-07 09:33:23.046128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.160 [2024-10-07 09:33:23.046152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.160 [2024-10-07 09:33:23.046180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.160 [2024-10-07 09:33:23.046183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.421 [2024-10-07 09:33:23.210518] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.421 [2024-10-07 09:33:23.222771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:34.421 09:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.421 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.680 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:34.680 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:34.680 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:34.680 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.680 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.680 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.681 09:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.681 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.940 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.199 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.459 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:35.719 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:35.719 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:35.719 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:35.719 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:35.719 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:35.719 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.719 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:35.978 09:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.978 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.237 rmmod nvme_tcp 00:12:36.237 rmmod nvme_fabrics 00:12:36.237 rmmod nvme_keyring 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 170939 ']' 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 170939 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 170939 ']' 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 170939 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 170939 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 170939' 00:12:36.237 killing process with pid 170939 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- 
# kill 170939 00:12:36.237 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 170939 00:12:36.495 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:36.495 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:36.495 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:36.495 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:36.496 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:36.496 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:36.496 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:36.496 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.496 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.496 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.496 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.496 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.041 00:12:39.041 real 0m7.153s 00:12:39.041 user 0m11.135s 00:12:39.041 sys 0m2.246s 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.041 ************************************ 
00:12:39.041 END TEST nvmf_referrals 00:12:39.041 ************************************ 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.041 ************************************ 00:12:39.041 START TEST nvmf_connect_disconnect 00:12:39.041 ************************************ 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:39.041 * Looking for test storage... 
00:12:39.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:39.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.041 --rc genhtml_branch_coverage=1 00:12:39.041 --rc genhtml_function_coverage=1 00:12:39.041 --rc genhtml_legend=1 00:12:39.041 --rc geninfo_all_blocks=1 00:12:39.041 --rc geninfo_unexecuted_blocks=1 00:12:39.041 00:12:39.041 ' 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:39.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.041 --rc genhtml_branch_coverage=1 00:12:39.041 --rc genhtml_function_coverage=1 00:12:39.041 --rc genhtml_legend=1 00:12:39.041 --rc geninfo_all_blocks=1 00:12:39.041 --rc geninfo_unexecuted_blocks=1 00:12:39.041 00:12:39.041 ' 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:39.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.041 --rc genhtml_branch_coverage=1 00:12:39.041 --rc genhtml_function_coverage=1 00:12:39.041 --rc genhtml_legend=1 00:12:39.041 --rc geninfo_all_blocks=1 00:12:39.041 --rc geninfo_unexecuted_blocks=1 00:12:39.041 00:12:39.041 ' 00:12:39.041 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:39.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.041 --rc genhtml_branch_coverage=1 00:12:39.041 --rc genhtml_function_coverage=1 00:12:39.041 --rc genhtml_legend=1 00:12:39.041 --rc geninfo_all_blocks=1 00:12:39.042 --rc geninfo_unexecuted_blocks=1 00:12:39.042 00:12:39.042 ' 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.042 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.955 09:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.955 09:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:12:40.955 Found 0000:09:00.0 (0x8086 - 0x1592) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:12:40.955 Found 0000:09:00.1 (0x8086 - 0x1592) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.955 09:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:40.955 Found net devices under 0000:09:00.0: cvl_0_0 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:40.955 09:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:40.955 Found net devices under 0000:09:00.1: cvl_0_1 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.955 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.956 09:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:12:40.956 00:12:40.956 --- 10.0.0.2 ping statistics --- 00:12:40.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.956 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:40.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:12:40.956 00:12:40.956 --- 10.0.0.1 ping statistics --- 00:12:40.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.956 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # 
nvmfpid=173154 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 173154 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 173154 ']' 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.956 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.956 [2024-10-07 09:33:29.916113] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:12:40.956 [2024-10-07 09:33:29.916218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.215 [2024-10-07 09:33:29.979238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.215 [2024-10-07 09:33:30.103762] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:41.215 [2024-10-07 09:33:30.103832] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:41.215 [2024-10-07 09:33:30.103862] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:41.215 [2024-10-07 09:33:30.103874] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:41.215 [2024-10-07 09:33:30.103884] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:41.215 [2024-10-07 09:33:30.105542] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:12:41.215 [2024-10-07 09:33:30.105603] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:12:41.215 [2024-10-07 09:33:30.105679] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:12:41.215 [2024-10-07 09:33:30.105683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:41.473 [2024-10-07 09:33:30.273588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:41.473 [2024-10-07 09:33:30.324886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:12:41.473 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:12:44.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:47.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:49.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:53.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:55.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:55.672 rmmod nvme_tcp
00:12:55.672 rmmod nvme_fabrics
00:12:55.672 rmmod nvme_keyring
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 173154 ']'
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 173154
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 173154 ']'
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 173154
00:12:55.672 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname
00:12:55.673 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:55.673 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173154
00:12:55.673 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:55.673 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:55.673 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173154'
killing process with pid 173154
00:12:55.673 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 173154
00:12:55.673 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 173154
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:55.934 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:57.846 
00:12:57.846 real 0m19.146s
00:12:57.846 user 0m57.772s
00:12:57.846 sys 0m3.298s
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:57.846 ************************************
00:12:57.846 END TEST nvmf_connect_disconnect
00:12:57.846 ************************************
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:57.846 ************************************
00:12:57.846 START TEST nvmf_multitarget
00:12:57.846 ************************************
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:12:57.846 * Looking for test storage...
00:12:57.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version
00:12:57.846 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-:
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-:
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<'
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:12:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:58.106 --rc genhtml_branch_coverage=1
00:12:58.106 --rc genhtml_function_coverage=1
00:12:58.106 --rc genhtml_legend=1
00:12:58.106 --rc geninfo_all_blocks=1
00:12:58.106 --rc geninfo_unexecuted_blocks=1
00:12:58.106 
00:12:58.106 '
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:12:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:58.106 --rc genhtml_branch_coverage=1
00:12:58.106 --rc genhtml_function_coverage=1
00:12:58.106 --rc genhtml_legend=1
00:12:58.106 --rc geninfo_all_blocks=1
00:12:58.106 --rc geninfo_unexecuted_blocks=1
00:12:58.106 
00:12:58.106 '
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:12:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:58.106 --rc genhtml_branch_coverage=1
00:12:58.106 --rc genhtml_function_coverage=1
00:12:58.106 --rc genhtml_legend=1
00:12:58.106 --rc geninfo_all_blocks=1
00:12:58.106 --rc geninfo_unexecuted_blocks=1
00:12:58.106 
00:12:58.106 '
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:12:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:58.106 --rc genhtml_branch_coverage=1
00:12:58.106 --rc genhtml_function_coverage=1
00:12:58.106 --rc genhtml_legend=1
00:12:58.106 --rc geninfo_all_blocks=1
00:12:58.106 --rc geninfo_unexecuted_blocks=1
00:12:58.106 
00:12:58.106 '
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:58.106 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable
00:12:58.107 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=()
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=()
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=()
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=()
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=()
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:00.643 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)'
Found 0000:09:00.0 (0x8086 - 0x1592)
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)'
Found 0000:09:00.1 (0x8086 - 0x1592)
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
Found net devices under 0000:09:00.0: cvl_0_0
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
Found net devices under 0000:09:00.1: cvl_0_1
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns
exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.644 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:00.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:13:00.645 00:13:00.645 --- 10.0.0.2 ping statistics --- 00:13:00.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.645 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:00.645 00:13:00.645 --- 10.0.0.1 ping statistics --- 00:13:00.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.645 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=176755 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 176755 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 176755 ']' 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:00.645 [2024-10-07 09:33:49.253937] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:13:00.645 [2024-10-07 09:33:49.254019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.645 [2024-10-07 09:33:49.317726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.645 [2024-10-07 09:33:49.423771] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.645 [2024-10-07 09:33:49.423820] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:00.645 [2024-10-07 09:33:49.423848] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.645 [2024-10-07 09:33:49.423859] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.645 [2024-10-07 09:33:49.423868] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.645 [2024-10-07 09:33:49.425392] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.645 [2024-10-07 09:33:49.425502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.645 [2024-10-07 09:33:49.425592] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.645 [2024-10-07 09:33:49.425595] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:00.645 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:00.645 09:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:00.904 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:00.904 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:00.904 "nvmf_tgt_1" 00:13:00.904 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:01.162 "nvmf_tgt_2" 00:13:01.162 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:01.162 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:01.162 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:01.162 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:01.421 true 00:13:01.421 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:01.421 true 00:13:01.421 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:01.421 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.680 rmmod nvme_tcp 00:13:01.680 rmmod nvme_fabrics 00:13:01.680 rmmod nvme_keyring 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 176755 ']' 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 176755 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 176755 ']' 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 176755 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 176755 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 176755' 00:13:01.680 killing process with pid 176755 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 176755 00:13:01.680 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 176755 00:13:01.939 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:01.939 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:01.939 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:01.939 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:01.939 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:13:01.939 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:01.940 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:13:01.940 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.940 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.940 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.940 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.940 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.851 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.851 00:13:03.851 real 0m6.039s 00:13:03.851 user 0m6.872s 00:13:03.851 sys 0m2.005s 00:13:03.851 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.851 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.851 ************************************ 00:13:03.851 END TEST nvmf_multitarget 00:13:03.851 ************************************ 00:13:03.851 09:33:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:03.851 09:33:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:03.851 09:33:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.851 09:33:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.112 ************************************ 00:13:04.112 START TEST nvmf_rpc 00:13:04.112 ************************************ 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:04.112 * Looking for test storage... 
00:13:04.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.112 09:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:04.112 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:04.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.112 --rc genhtml_branch_coverage=1 00:13:04.112 --rc genhtml_function_coverage=1 00:13:04.112 --rc genhtml_legend=1 00:13:04.112 --rc geninfo_all_blocks=1 00:13:04.112 --rc geninfo_unexecuted_blocks=1 
00:13:04.112 00:13:04.112 ' 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:04.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.112 --rc genhtml_branch_coverage=1 00:13:04.112 --rc genhtml_function_coverage=1 00:13:04.112 --rc genhtml_legend=1 00:13:04.112 --rc geninfo_all_blocks=1 00:13:04.112 --rc geninfo_unexecuted_blocks=1 00:13:04.112 00:13:04.112 ' 00:13:04.112 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:04.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.112 --rc genhtml_branch_coverage=1 00:13:04.112 --rc genhtml_function_coverage=1 00:13:04.112 --rc genhtml_legend=1 00:13:04.112 --rc geninfo_all_blocks=1 00:13:04.112 --rc geninfo_unexecuted_blocks=1 00:13:04.112 00:13:04.112 ' 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:04.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.113 --rc genhtml_branch_coverage=1 00:13:04.113 --rc genhtml_function_coverage=1 00:13:04.113 --rc genhtml_legend=1 00:13:04.113 --rc geninfo_all_blocks=1 00:13:04.113 --rc geninfo_unexecuted_blocks=1 00:13:04.113 00:13:04.113 ' 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.113 09:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:04.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable
00:13:04.113 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=()
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=()
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=()
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=()
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=()
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:06.650 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)'
00:13:06.651 Found 0000:09:00.0 (0x8086 - 0x1592)
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)'
00:13:06.651 Found 0000:09:00.1 (0x8086 - 0x1592)
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:13:06.651 Found net devices under 0000:09:00.0: cvl_0_0
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:13:06.651 Found net devices under 0000:09:00.1: cvl_0_1
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:06.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:06.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms
00:13:06.651 
00:13:06.651 --- 10.0.0.2 ping statistics ---
00:13:06.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:06.651 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:06.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:06.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms
00:13:06.651 
00:13:06.651 --- 10.0.0.1 ping statistics ---
00:13:06.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:06.651 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:06.651 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=178761
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 178761
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 178761 ']'
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:06.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.652 [2024-10-07 09:33:55.302275] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:13:06.652 [2024-10-07 09:33:55.302339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:06.652 [2024-10-07 09:33:55.362452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:06.652 [2024-10-07 09:33:55.469436] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:06.652 [2024-10-07 09:33:55.469543] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:06.652 [2024-10-07 09:33:55.469572] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:06.652 [2024-10-07 09:33:55.469583] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:06.652 [2024-10-07 09:33:55.469593] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:06.652 [2024-10-07 09:33:55.471288] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:13:06.652 [2024-10-07 09:33:55.471355] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:13:06.652 [2024-10-07 09:33:55.474688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:13:06.652 [2024-10-07 09:33:55.474700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:13:06.652 "tick_rate": 2700000000,
00:13:06.652 "poll_groups": [
00:13:06.652 {
00:13:06.652 "name": "nvmf_tgt_poll_group_000",
00:13:06.652 "admin_qpairs": 0,
00:13:06.652 "io_qpairs": 0,
00:13:06.652 "current_admin_qpairs": 0,
00:13:06.652 "current_io_qpairs": 0,
00:13:06.652 "pending_bdev_io": 0,
00:13:06.652 "completed_nvme_io": 0,
00:13:06.652 "transports": []
00:13:06.652 },
00:13:06.652 {
00:13:06.652 "name": "nvmf_tgt_poll_group_001",
00:13:06.652 "admin_qpairs": 0,
00:13:06.652 "io_qpairs": 0,
00:13:06.652 "current_admin_qpairs": 0,
00:13:06.652 "current_io_qpairs": 0,
00:13:06.652 "pending_bdev_io": 0,
00:13:06.652 "completed_nvme_io": 0,
00:13:06.652 "transports": []
00:13:06.652 },
00:13:06.652 {
00:13:06.652 "name": "nvmf_tgt_poll_group_002",
00:13:06.652 "admin_qpairs": 0,
00:13:06.652 "io_qpairs": 0,
00:13:06.652 "current_admin_qpairs": 0,
00:13:06.652 "current_io_qpairs": 0,
00:13:06.652 "pending_bdev_io": 0,
00:13:06.652 "completed_nvme_io": 0,
00:13:06.652 "transports": []
00:13:06.652 },
00:13:06.652 {
00:13:06.652 "name": "nvmf_tgt_poll_group_003",
00:13:06.652 "admin_qpairs": 0,
00:13:06.652 "io_qpairs": 0,
00:13:06.652 "current_admin_qpairs": 0,
00:13:06.652 "current_io_qpairs": 0,
00:13:06.652 "pending_bdev_io": 0,
00:13:06.652 "completed_nvme_io": 0,
00:13:06.652 "transports": []
00:13:06.652 }
00:13:06.652 ]
00:13:06.652 }'
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:13:06.652 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:13:06.911 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:13:06.911 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:13:06.911 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:13:06.911 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.912 [2024-10-07 09:33:55.728249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:13:06.912 "tick_rate": 2700000000,
00:13:06.912 "poll_groups": [
00:13:06.912 {
00:13:06.912 "name": "nvmf_tgt_poll_group_000",
00:13:06.912 "admin_qpairs": 0,
00:13:06.912 "io_qpairs": 0,
00:13:06.912 "current_admin_qpairs": 0,
00:13:06.912 "current_io_qpairs": 0,
00:13:06.912 "pending_bdev_io": 0,
00:13:06.912 "completed_nvme_io": 0,
00:13:06.912 "transports": [
00:13:06.912 {
00:13:06.912 "trtype": "TCP"
00:13:06.912 }
00:13:06.912 ]
00:13:06.912 },
00:13:06.912 {
00:13:06.912 "name": "nvmf_tgt_poll_group_001",
00:13:06.912 "admin_qpairs": 0,
00:13:06.912 "io_qpairs": 0,
00:13:06.912 "current_admin_qpairs": 0,
00:13:06.912 "current_io_qpairs": 0,
00:13:06.912 "pending_bdev_io": 0,
00:13:06.912 "completed_nvme_io": 0,
00:13:06.912 "transports": [
00:13:06.912 {
00:13:06.912 "trtype": "TCP"
00:13:06.912 }
00:13:06.912 ]
00:13:06.912 },
00:13:06.912 {
00:13:06.912 "name": "nvmf_tgt_poll_group_002",
00:13:06.912 "admin_qpairs": 0,
00:13:06.912 "io_qpairs": 0,
00:13:06.912 "current_admin_qpairs": 0,
00:13:06.912 "current_io_qpairs": 0,
00:13:06.912 "pending_bdev_io": 0,
00:13:06.912 "completed_nvme_io": 0,
00:13:06.912 "transports": [
00:13:06.912 {
00:13:06.912 "trtype": "TCP"
00:13:06.912 }
00:13:06.912 ]
00:13:06.912 },
00:13:06.912 {
00:13:06.912 "name": "nvmf_tgt_poll_group_003",
00:13:06.912 "admin_qpairs": 0,
00:13:06.912 "io_qpairs": 0,
00:13:06.912 "current_admin_qpairs": 0,
00:13:06.912 "current_io_qpairs": 0,
00:13:06.912 "pending_bdev_io": 0,
00:13:06.912 "completed_nvme_io": 0,
00:13:06.912 "transports": [
00:13:06.912 {
00:13:06.912 "trtype": "TCP"
00:13:06.912 }
00:13:06.912 ]
00:13:06.912 }
00:13:06.912 ]
00:13:06.912 }'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.912 Malloc1
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:06.912 [2024-10-07 09:33:55.867391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.2 -s 4420
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.2 -s 4420
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:13:06.912 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.2 -s 4420
00:13:06.912 [2024-10-07 09:33:55.890048] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4'
00:13:07.179 Failed to write to /dev/nvme-fabrics: Input/output error
00:13:07.179 could not add new controller: failed to write to nvme-fabrics device
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:07.179 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:07.749 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:13:07.749 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:13:07.749 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:07.749 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:07.749 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:09.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:13:09.659 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:13:09.660 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:09.660 [2024-10-07 09:33:58.651025] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4'
00:13:09.920 Failed to write to /dev/nvme-fabrics: Input/output error
00:13:09.920 could not add new controller: failed to write to nvme-fabrics device
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:09.920 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:10.490 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:13:10.490 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:13:10.490 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:10.490 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:10.490 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # ((
nvme_devices == nvme_device_counter )) 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.397 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.657 [2024-10-07 09:34:01.417062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.657 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.227 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.227 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:13.227 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.227 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:13.227 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:15.137 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:15.137 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:15.137 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.397 09:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.397 [2024-10-07 09:34:04.283548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.397 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.398 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.967 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.967 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.967 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.967 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:15.967 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:18.528 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.528 [2024-10-07 09:34:07.033799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.528 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.788 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.788 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.788 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:18.788 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:18.788 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.330 [2024-10-07 09:34:09.874162] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.330 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.331 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.331 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.331 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.331 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.589 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.589 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.589 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.589 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:21.589 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.128 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 [2024-10-07 09:34:12.623587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.129 09:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.129 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.387 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.387 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:24.387 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.387 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:24.387 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 [2024-10-07 09:34:15.459591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 [2024-10-07 09:34:15.507633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.928 
09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 [2024-10-07 09:34:15.555847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.928 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.929 
09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 [2024-10-07 09:34:15.604025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 [2024-10-07 
09:34:15.652187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 
09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:26.929 "tick_rate": 2700000000, 00:13:26.929 "poll_groups": [ 00:13:26.929 { 00:13:26.929 "name": "nvmf_tgt_poll_group_000", 00:13:26.929 "admin_qpairs": 2, 00:13:26.929 "io_qpairs": 84, 00:13:26.929 "current_admin_qpairs": 0, 00:13:26.929 "current_io_qpairs": 0, 00:13:26.929 "pending_bdev_io": 0, 00:13:26.929 "completed_nvme_io": 182, 00:13:26.929 "transports": [ 00:13:26.929 { 00:13:26.929 "trtype": "TCP" 00:13:26.929 } 00:13:26.929 ] 00:13:26.929 }, 00:13:26.929 { 00:13:26.929 "name": "nvmf_tgt_poll_group_001", 00:13:26.929 "admin_qpairs": 2, 00:13:26.929 "io_qpairs": 84, 00:13:26.929 "current_admin_qpairs": 0, 00:13:26.929 "current_io_qpairs": 0, 00:13:26.929 "pending_bdev_io": 0, 00:13:26.929 "completed_nvme_io": 87, 00:13:26.929 "transports": [ 00:13:26.929 { 00:13:26.929 "trtype": "TCP" 00:13:26.929 } 00:13:26.929 ] 00:13:26.929 }, 00:13:26.929 { 00:13:26.929 "name": "nvmf_tgt_poll_group_002", 00:13:26.929 "admin_qpairs": 1, 00:13:26.929 "io_qpairs": 84, 00:13:26.929 "current_admin_qpairs": 0, 00:13:26.929 "current_io_qpairs": 0, 00:13:26.929 "pending_bdev_io": 0, 00:13:26.929 "completed_nvme_io": 185, 00:13:26.929 "transports": [ 00:13:26.929 { 00:13:26.929 "trtype": "TCP" 00:13:26.929 } 00:13:26.929 ] 00:13:26.929 }, 00:13:26.929 { 00:13:26.929 "name": "nvmf_tgt_poll_group_003", 00:13:26.929 "admin_qpairs": 2, 00:13:26.929 "io_qpairs": 84, 
00:13:26.929 "current_admin_qpairs": 0, 00:13:26.929 "current_io_qpairs": 0, 00:13:26.929 "pending_bdev_io": 0, 00:13:26.929 "completed_nvme_io": 232, 00:13:26.929 "transports": [ 00:13:26.929 { 00:13:26.929 "trtype": "TCP" 00:13:26.929 } 00:13:26.929 ] 00:13:26.929 } 00:13:26.929 ] 00:13:26.929 }' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.929 rmmod nvme_tcp 00:13:26.929 rmmod nvme_fabrics 00:13:26.929 rmmod nvme_keyring 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 178761 ']' 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 178761 00:13:26.929 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 178761 ']' 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 178761 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 178761 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 178761' 00:13:26.930 killing process with pid 178761 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@969 -- # kill 178761 00:13:26.930 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 178761 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.497 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.407 00:13:29.407 real 0m25.392s 00:13:29.407 user 1m22.304s 00:13:29.407 sys 0m4.081s 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.407 ************************************ 00:13:29.407 END TEST nvmf_rpc 00:13:29.407 
************************************ 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.407 ************************************ 00:13:29.407 START TEST nvmf_invalid 00:13:29.407 ************************************ 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:29.407 * Looking for test storage... 00:13:29.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:13:29.407 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.667 --rc genhtml_branch_coverage=1 00:13:29.667 --rc genhtml_function_coverage=1 00:13:29.667 --rc genhtml_legend=1 00:13:29.667 --rc geninfo_all_blocks=1 00:13:29.667 --rc geninfo_unexecuted_blocks=1 00:13:29.667 00:13:29.667 ' 
00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.667 --rc genhtml_branch_coverage=1 00:13:29.667 --rc genhtml_function_coverage=1 00:13:29.667 --rc genhtml_legend=1 00:13:29.667 --rc geninfo_all_blocks=1 00:13:29.667 --rc geninfo_unexecuted_blocks=1 00:13:29.667 00:13:29.667 ' 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.667 --rc genhtml_branch_coverage=1 00:13:29.667 --rc genhtml_function_coverage=1 00:13:29.667 --rc genhtml_legend=1 00:13:29.667 --rc geninfo_all_blocks=1 00:13:29.667 --rc geninfo_unexecuted_blocks=1 00:13:29.667 00:13:29.667 ' 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.667 --rc genhtml_branch_coverage=1 00:13:29.667 --rc genhtml_function_coverage=1 00:13:29.667 --rc genhtml_legend=1 00:13:29.667 --rc geninfo_all_blocks=1 00:13:29.667 --rc geninfo_unexecuted_blocks=1 00:13:29.667 00:13:29.667 ' 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.667 09:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.667 
09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.667 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.668 09:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:29.668 09:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.668 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:31.571 09:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.571 09:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:13:31.571 Found 0000:09:00.0 (0x8086 - 0x1592) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:13:31.571 Found 0000:09:00.1 (0x8086 - 0x1592) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:31.571 Found net devices under 0000:09:00.0: cvl_0_0 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:31.571 Found net devices under 0000:09:00.1: cvl_0_1 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.571 09:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.571 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.833 09:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:31.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:13:31.833 00:13:31.833 --- 10.0.0.2 ping statistics --- 00:13:31.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.833 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:13:31.833 00:13:31.833 --- 10.0.0.1 ping statistics --- 00:13:31.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.833 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:31.833 09:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=183185 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 183185 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 183185 ']' 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.833 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.833 [2024-10-07 09:34:20.746101] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:13:31.833 [2024-10-07 09:34:20.746207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.833 [2024-10-07 09:34:20.810386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.093 [2024-10-07 09:34:20.926899] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.093 [2024-10-07 09:34:20.926970] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.093 [2024-10-07 09:34:20.926984] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.093 [2024-10-07 09:34:20.926995] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.093 [2024-10-07 09:34:20.927005] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:32.093 [2024-10-07 09:34:20.928571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.093 [2024-10-07 09:34:20.928697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.093 [2024-10-07 09:34:20.928702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.093 [2024-10-07 09:34:20.928614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.093 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.093 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:32.093 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:32.093 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:32.093 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.352 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.352 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:32.352 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27919 00:13:32.610 [2024-10-07 09:34:21.400461] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:32.610 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:32.610 { 00:13:32.610 "nqn": "nqn.2016-06.io.spdk:cnode27919", 00:13:32.610 "tgt_name": "foobar", 00:13:32.610 "method": "nvmf_create_subsystem", 00:13:32.610 "req_id": 1 00:13:32.610 } 00:13:32.610 Got JSON-RPC error 
response 00:13:32.610 response: 00:13:32.610 { 00:13:32.610 "code": -32603, 00:13:32.610 "message": "Unable to find target foobar" 00:13:32.610 }' 00:13:32.610 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:32.610 { 00:13:32.610 "nqn": "nqn.2016-06.io.spdk:cnode27919", 00:13:32.610 "tgt_name": "foobar", 00:13:32.610 "method": "nvmf_create_subsystem", 00:13:32.610 "req_id": 1 00:13:32.610 } 00:13:32.610 Got JSON-RPC error response 00:13:32.610 response: 00:13:32.610 { 00:13:32.610 "code": -32603, 00:13:32.610 "message": "Unable to find target foobar" 00:13:32.610 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:32.610 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:32.610 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19753 00:13:32.868 [2024-10-07 09:34:21.685452] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19753: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:32.868 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:32.868 { 00:13:32.868 "nqn": "nqn.2016-06.io.spdk:cnode19753", 00:13:32.868 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:32.868 "method": "nvmf_create_subsystem", 00:13:32.868 "req_id": 1 00:13:32.868 } 00:13:32.868 Got JSON-RPC error response 00:13:32.868 response: 00:13:32.868 { 00:13:32.868 "code": -32602, 00:13:32.868 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:32.868 }' 00:13:32.868 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:32.868 { 00:13:32.868 "nqn": "nqn.2016-06.io.spdk:cnode19753", 00:13:32.868 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:32.868 "method": "nvmf_create_subsystem", 
00:13:32.868 "req_id": 1 00:13:32.868 } 00:13:32.868 Got JSON-RPC error response 00:13:32.868 response: 00:13:32.868 { 00:13:32.868 "code": -32602, 00:13:32.868 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:32.868 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:32.868 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:32.868 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9652 00:13:33.128 [2024-10-07 09:34:21.954378] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9652: invalid model number 'SPDK_Controller' 00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:33.128 { 00:13:33.128 "nqn": "nqn.2016-06.io.spdk:cnode9652", 00:13:33.128 "model_number": "SPDK_Controller\u001f", 00:13:33.128 "method": "nvmf_create_subsystem", 00:13:33.128 "req_id": 1 00:13:33.128 } 00:13:33.128 Got JSON-RPC error response 00:13:33.128 response: 00:13:33.128 { 00:13:33.128 "code": -32602, 00:13:33.128 "message": "Invalid MN SPDK_Controller\u001f" 00:13:33.128 }' 00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:33.128 { 00:13:33.128 "nqn": "nqn.2016-06.io.spdk:cnode9652", 00:13:33.128 "model_number": "SPDK_Controller\u001f", 00:13:33.128 "method": "nvmf_create_subsystem", 00:13:33.128 "req_id": 1 00:13:33.128 } 00:13:33.128 Got JSON-RPC error response 00:13:33.128 response: 00:13:33.128 { 00:13:33.128 "code": -32602, 00:13:33.128 "message": "Invalid MN SPDK_Controller\u001f" 00:13:33.128 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll
00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:13:33.128 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[00:13:33.128-00:13:33.129, 09:34:21-09:34:22 -- 21 loop iterations elided: each repeats invalid.sh@25 "printf %x <code>", "echo -e '\x<code>'", "string+=<char>" and invalid.sh@24 "(( ll++ ))", "(( ll < length ))"; characters appended in order: b Z [ ` Q d : F r s % ( E K B L m * ' i E, ending with the final failing (( ll < length )) check]
00:13:33.129 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]]
00:13:33.129 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bZ[`Qd:Frs%(EKBLm*'\''iE'
00:13:33.129 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bZ[`Qd:Frs%(EKBLm*'\''iE' nqn.2016-06.io.spdk:cnode16172
00:13:33.388 [2024-10-07 09:34:22.303473] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16172: invalid serial number 'bZ[`Qd:Frs%(EKBLm*'iE'
00:13:33.388 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:13:33.388 {
00:13:33.388 "nqn": "nqn.2016-06.io.spdk:cnode16172",
00:13:33.388 "serial_number": "bZ[`Qd:Frs%(EKBLm*'\''iE",
00:13:33.388 "method": "nvmf_create_subsystem",
00:13:33.388 "req_id": 1
00:13:33.388 }
00:13:33.388 Got JSON-RPC error response
00:13:33.388 response:
00:13:33.388 {
00:13:33.388 "code": -32602,
00:13:33.388 "message": "Invalid SN bZ[`Qd:Frs%(EKBLm*'\''iE"
00:13:33.388 }'
00:13:33.388 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ ... == *\I\n\v\a\l\i\d\ \S\N* ]]   [left side: the full "Invalid SN" $out response above, repeated verbatim]
00:13:33.388 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:13:33.388 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:13:33.388
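Reconstructed from the trace above, the `gen_random_s` helper (target/invalid.sh@19-31) appears to build its string one character at a time from the printable codepoint range 32-127. A minimal standalone sketch of that loop follows; the function name and structure are taken from the trace, and the exact upstream implementation may differ:

```shell
#!/usr/bin/env bash
# Sketch of the gen_random_s helper whose per-character trace appears above:
# pick `length` random codepoints from 32-127 and append each as a literal
# character. The trace does the conversion with `printf %x` + `echo -e '\xNN'`;
# printf is used for both steps here.
gen_random_s() {
    local length=$1 ll
    local chars=($(seq 32 127))   # same codepoint pool as the chars=() array in the log
    local string=
    for ((ll = 0; ll < length; ll++)); do
        # convert a random codepoint to hex, then to its literal character
        string+=$(printf "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    printf '%s\n' "$string"   # printf avoids echo's option parsing if the string starts with '-'
}

gen_random_s 21   # e.g. a string like the bZ[`Qd:Frs%(EKBLm*'iE seen in the log
```

The generated serial number is then passed to `rpc.py nvmf_create_subsystem -s`, which the target is expected to reject.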
09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' ... '127')   [same 96-entry codepoint array as the first gen_random_s call above]
00:13:33.388 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:13:33.388 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:13:33.388 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:13:33.388 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[00:13:33.388-00:13:33.649, 09:34:22 -- 41 loop iterations elided: same printf %x / echo -e / string+= / (( ll++ )) / (( ll < length )) pattern as above, appending in order the characters of the string echoed next]
00:13:33.649 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]]
00:13:33.649 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'u#6xggkD={3"6+kTKQV%.G~B6nqh1P7&Ey#*8i!+T'
00:13:33.649 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'u#6xggkD={3"6+kTKQV%.G~B6nqh1P7&Ey#*8i!+T' nqn.2016-06.io.spdk:cnode5755
00:13:33.908 [2024-10-07 09:34:22.745029] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5755: invalid model number 'u#6xggkD={3"6+kTKQV%.G~B6nqh1P7&Ey#*8i!+T'
00:13:33.908 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:13:33.908 {
00:13:33.908 "nqn": "nqn.2016-06.io.spdk:cnode5755",
00:13:33.908 "model_number": "u#6xggkD={3\"6+kTKQV%.G~B6nqh1P7&Ey#*8i!+T",
00:13:33.908 "method": "nvmf_create_subsystem",
00:13:33.908 "req_id": 1
00:13:33.908 }
00:13:33.908 Got JSON-RPC error response
00:13:33.908 response:
00:13:33.908 {
00:13:33.908 "code": -32602,
00:13:33.908 "message": "Invalid MN u#6xggkD={3\"6+kTKQV%.G~B6nqh1P7&Ey#*8i!+T"
00:13:33.908 }'
00:13:33.908 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ ... == *\I\n\v\a\l\i\d\ \M\N* ]]   [left side: the full "Invalid MN" $out response above, repeated verbatim]
00:13:33.908 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:13:34.166 [2024-10-07 09:34:23.026056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:34.166 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:13:34.424 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:13:34.424 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:13:34.424 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:13:34.424 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 --
# IP= 00:13:34.424 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:34.683 [2024-10-07 09:34:23.559772] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:34.683 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:34.683 { 00:13:34.683 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:34.683 "listen_address": { 00:13:34.683 "trtype": "tcp", 00:13:34.683 "traddr": "", 00:13:34.683 "trsvcid": "4421" 00:13:34.683 }, 00:13:34.683 "method": "nvmf_subsystem_remove_listener", 00:13:34.683 "req_id": 1 00:13:34.683 } 00:13:34.683 Got JSON-RPC error response 00:13:34.683 response: 00:13:34.683 { 00:13:34.683 "code": -32602, 00:13:34.683 "message": "Invalid parameters" 00:13:34.683 }' 00:13:34.683 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:34.683 { 00:13:34.683 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:34.683 "listen_address": { 00:13:34.683 "trtype": "tcp", 00:13:34.683 "traddr": "", 00:13:34.683 "trsvcid": "4421" 00:13:34.683 }, 00:13:34.683 "method": "nvmf_subsystem_remove_listener", 00:13:34.683 "req_id": 1 00:13:34.683 } 00:13:34.683 Got JSON-RPC error response 00:13:34.683 response: 00:13:34.683 { 00:13:34.683 "code": -32602, 00:13:34.683 "message": "Invalid parameters" 00:13:34.683 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:34.683 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9891 -i 0 00:13:34.941 [2024-10-07 09:34:23.836629] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9891: invalid cntlid range [0-65519] 00:13:34.941 09:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:34.941 { 00:13:34.941 "nqn": "nqn.2016-06.io.spdk:cnode9891", 00:13:34.941 "min_cntlid": 0, 00:13:34.941 "method": "nvmf_create_subsystem", 00:13:34.941 "req_id": 1 00:13:34.941 } 00:13:34.941 Got JSON-RPC error response 00:13:34.941 response: 00:13:34.941 { 00:13:34.941 "code": -32602, 00:13:34.941 "message": "Invalid cntlid range [0-65519]" 00:13:34.941 }' 00:13:34.941 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:34.941 { 00:13:34.941 "nqn": "nqn.2016-06.io.spdk:cnode9891", 00:13:34.941 "min_cntlid": 0, 00:13:34.941 "method": "nvmf_create_subsystem", 00:13:34.941 "req_id": 1 00:13:34.941 } 00:13:34.941 Got JSON-RPC error response 00:13:34.941 response: 00:13:34.941 { 00:13:34.941 "code": -32602, 00:13:34.941 "message": "Invalid cntlid range [0-65519]" 00:13:34.941 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.941 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20107 -i 65520 00:13:35.200 [2024-10-07 09:34:24.105527] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20107: invalid cntlid range [65520-65519] 00:13:35.200 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:35.200 { 00:13:35.200 "nqn": "nqn.2016-06.io.spdk:cnode20107", 00:13:35.200 "min_cntlid": 65520, 00:13:35.200 "method": "nvmf_create_subsystem", 00:13:35.200 "req_id": 1 00:13:35.200 } 00:13:35.200 Got JSON-RPC error response 00:13:35.200 response: 00:13:35.200 { 00:13:35.200 "code": -32602, 00:13:35.200 "message": "Invalid cntlid range [65520-65519]" 00:13:35.200 }' 00:13:35.200 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:35.200 { 00:13:35.200 "nqn": 
"nqn.2016-06.io.spdk:cnode20107", 00:13:35.200 "min_cntlid": 65520, 00:13:35.200 "method": "nvmf_create_subsystem", 00:13:35.200 "req_id": 1 00:13:35.200 } 00:13:35.200 Got JSON-RPC error response 00:13:35.200 response: 00:13:35.200 { 00:13:35.200 "code": -32602, 00:13:35.200 "message": "Invalid cntlid range [65520-65519]" 00:13:35.200 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.200 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28048 -I 0 00:13:35.460 [2024-10-07 09:34:24.378424] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28048: invalid cntlid range [1-0] 00:13:35.460 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:35.460 { 00:13:35.460 "nqn": "nqn.2016-06.io.spdk:cnode28048", 00:13:35.460 "max_cntlid": 0, 00:13:35.460 "method": "nvmf_create_subsystem", 00:13:35.460 "req_id": 1 00:13:35.460 } 00:13:35.460 Got JSON-RPC error response 00:13:35.460 response: 00:13:35.460 { 00:13:35.460 "code": -32602, 00:13:35.460 "message": "Invalid cntlid range [1-0]" 00:13:35.460 }' 00:13:35.460 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:35.460 { 00:13:35.460 "nqn": "nqn.2016-06.io.spdk:cnode28048", 00:13:35.460 "max_cntlid": 0, 00:13:35.460 "method": "nvmf_create_subsystem", 00:13:35.460 "req_id": 1 00:13:35.460 } 00:13:35.460 Got JSON-RPC error response 00:13:35.460 response: 00:13:35.460 { 00:13:35.460 "code": -32602, 00:13:35.460 "message": "Invalid cntlid range [1-0]" 00:13:35.460 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.460 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5263 -I 65520 00:13:35.717 [2024-10-07 
09:34:24.643313] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5263: invalid cntlid range [1-65520] 00:13:35.717 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:35.717 { 00:13:35.717 "nqn": "nqn.2016-06.io.spdk:cnode5263", 00:13:35.717 "max_cntlid": 65520, 00:13:35.717 "method": "nvmf_create_subsystem", 00:13:35.717 "req_id": 1 00:13:35.717 } 00:13:35.717 Got JSON-RPC error response 00:13:35.717 response: 00:13:35.717 { 00:13:35.717 "code": -32602, 00:13:35.717 "message": "Invalid cntlid range [1-65520]" 00:13:35.717 }' 00:13:35.717 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:35.717 { 00:13:35.717 "nqn": "nqn.2016-06.io.spdk:cnode5263", 00:13:35.717 "max_cntlid": 65520, 00:13:35.717 "method": "nvmf_create_subsystem", 00:13:35.717 "req_id": 1 00:13:35.717 } 00:13:35.717 Got JSON-RPC error response 00:13:35.717 response: 00:13:35.717 { 00:13:35.717 "code": -32602, 00:13:35.717 "message": "Invalid cntlid range [1-65520]" 00:13:35.717 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.717 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26347 -i 6 -I 5 00:13:35.975 [2024-10-07 09:34:24.924254] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26347: invalid cntlid range [6-5] 00:13:35.975 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:35.975 { 00:13:35.975 "nqn": "nqn.2016-06.io.spdk:cnode26347", 00:13:35.975 "min_cntlid": 6, 00:13:35.975 "max_cntlid": 5, 00:13:35.975 "method": "nvmf_create_subsystem", 00:13:35.975 "req_id": 1 00:13:35.975 } 00:13:35.975 Got JSON-RPC error response 00:13:35.975 response: 00:13:35.975 { 00:13:35.975 "code": -32602, 00:13:35.975 "message": "Invalid cntlid range 
[6-5]" 00:13:35.975 }' 00:13:35.975 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:35.975 { 00:13:35.975 "nqn": "nqn.2016-06.io.spdk:cnode26347", 00:13:35.975 "min_cntlid": 6, 00:13:35.975 "max_cntlid": 5, 00:13:35.975 "method": "nvmf_create_subsystem", 00:13:35.975 "req_id": 1 00:13:35.975 } 00:13:35.975 Got JSON-RPC error response 00:13:35.975 response: 00:13:35.975 { 00:13:35.975 "code": -32602, 00:13:35.975 "message": "Invalid cntlid range [6-5]" 00:13:35.975 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.975 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:36.234 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:36.234 { 00:13:36.234 "name": "foobar", 00:13:36.234 "method": "nvmf_delete_target", 00:13:36.234 "req_id": 1 00:13:36.234 } 00:13:36.234 Got JSON-RPC error response 00:13:36.234 response: 00:13:36.234 { 00:13:36.234 "code": -32602, 00:13:36.234 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:36.234 }' 00:13:36.234 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:36.234 { 00:13:36.234 "name": "foobar", 00:13:36.234 "method": "nvmf_delete_target", 00:13:36.234 "req_id": 1 00:13:36.234 } 00:13:36.234 Got JSON-RPC error response 00:13:36.234 response: 00:13:36.234 { 00:13:36.234 "code": -32602, 00:13:36.234 "message": "The specified target doesn't exist, cannot delete it." 
00:13:36.235 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:36.235 rmmod nvme_tcp 00:13:36.235 rmmod nvme_fabrics 00:13:36.235 rmmod nvme_keyring 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 183185 ']' 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 183185 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 183185 ']' 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 183185 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 183185 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 183185' 00:13:36.235 killing process with pid 183185 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 183185 00:13:36.235 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 183185 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:36.496 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:36.497 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.497 09:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.497 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:39.037 00:13:39.037 real 0m9.150s 00:13:39.037 user 0m21.721s 00:13:39.037 sys 0m2.636s 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:39.037 ************************************ 00:13:39.037 END TEST nvmf_invalid 00:13:39.037 ************************************ 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.037 ************************************ 00:13:39.037 START TEST nvmf_connect_stress 00:13:39.037 ************************************ 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.037 * Looking for test storage... 
00:13:39.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.037 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:39.038 09:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.038 09:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:39.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.038 --rc genhtml_branch_coverage=1 00:13:39.038 --rc genhtml_function_coverage=1 00:13:39.038 --rc genhtml_legend=1 00:13:39.038 --rc geninfo_all_blocks=1 00:13:39.038 --rc geninfo_unexecuted_blocks=1 00:13:39.038 00:13:39.038 ' 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:39.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.038 --rc genhtml_branch_coverage=1 00:13:39.038 --rc genhtml_function_coverage=1 00:13:39.038 --rc genhtml_legend=1 00:13:39.038 --rc geninfo_all_blocks=1 00:13:39.038 --rc geninfo_unexecuted_blocks=1 00:13:39.038 00:13:39.038 ' 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:39.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.038 --rc genhtml_branch_coverage=1 00:13:39.038 --rc genhtml_function_coverage=1 00:13:39.038 --rc genhtml_legend=1 00:13:39.038 --rc geninfo_all_blocks=1 00:13:39.038 --rc geninfo_unexecuted_blocks=1 00:13:39.038 00:13:39.038 ' 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:39.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.038 --rc genhtml_branch_coverage=1 00:13:39.038 --rc genhtml_function_coverage=1 00:13:39.038 --rc genhtml_legend=1 00:13:39.038 --rc geninfo_all_blocks=1 00:13:39.038 --rc geninfo_unexecuted_blocks=1 00:13:39.038 00:13:39.038 ' 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:39.038 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:39.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:39.039 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.943 09:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:13:40.943 Found 0000:09:00.0 (0x8086 - 0x1592) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.943 09:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:13:40.943 Found 0000:09:00.1 (0x8086 - 0x1592) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.943 09:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:40.943 Found net devices under 0000:09:00.0: cvl_0_0 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.943 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:40.943 Found net devices under 0000:09:00.1: cvl_0_1 
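The discovery loop traced above finds interfaces by globbing `/sys/bus/pci/devices/$pci/net/`* and then stripping the path prefix with `"${pci_net_devs[@]##*/}"`. A minimal self-contained sketch of that two-step lookup, run against a throwaway fake sysfs tree so it does not depend on real hardware:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup performed by the discovery loop: list the
# network interfaces under a given PCI function. A temporary fake sysfs
# tree stands in for /sys so the example runs without the real NICs.
set -euo pipefail

sysfs=$(mktemp -d)
pci=0000:09:00.0
mkdir -p "$sysfs/bus/pci/devices/$pci/net/cvl_0_0"

# Step 1: glob the net/ directory (full paths land in the array).
pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
# Step 2: keep only the interface names via the ##*/ expansion.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

Against the real `/sys` this prints the same "Found net devices under 0000:09:00.0: cvl_0_0" style of line seen in the trace.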
00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:40.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:40.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:13:40.944 00:13:40.944 --- 10.0.0.2 ping statistics --- 00:13:40.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.944 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:13:40.944 00:13:40.944 --- 10.0.0.1 ping statistics --- 00:13:40.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.944 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:40.944 09:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=185712 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 185712 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 185712 ']' 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.944 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.206 [2024-10-07 09:34:29.951903] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:13:41.206 [2024-10-07 09:34:29.952005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.206 [2024-10-07 09:34:30.016084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.206 [2024-10-07 09:34:30.137930] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.206 [2024-10-07 09:34:30.138011] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.206 [2024-10-07 09:34:30.138025] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.206 [2024-10-07 09:34:30.138059] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.206 [2024-10-07 09:34:30.138069] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
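The per-command traces from `nvmf_tcp_init` earlier in the log (nvmf/common.sh@250-291) amount to the following topology setup, condensed here as an illustrative sketch: interface names are the ones from this run, and the commands need root, so this is a config fragment rather than something to execute as-is.

```shell
# Condensed sketch of the namespace plumbing traced above: the target-side
# NIC is isolated in a namespace, both sides get addresses on 10.0.0.0/24,
# the NVMe/TCP port is opened, and reachability is verified both ways.
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                            # host -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The nvmf_tgt process is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), which is why its listener at 10.0.0.2:4420 is only reachable from the host through cvl_0_1.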
00:13:41.206 [2024-10-07 09:34:30.138818] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.206 [2024-10-07 09:34:30.138877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.206 [2024-10-07 09:34:30.138881] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 [2024-10-07 09:34:30.288364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 [2024-10-07 09:34:30.322750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 NULL1 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=185737 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.466 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.467 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.727 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.727 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:41.727 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.727 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.727 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.297 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.297 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:42.297 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.297 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.297 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.556 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.556 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:42.556 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.556 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.556 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.816 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.816 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:42.816 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.816 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.816 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.075 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.075 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:43.075 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.075 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.075 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.334 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.334 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:43.334 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.334 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.334 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.906 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.906 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:43.906 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.906 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.906 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.193 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.193 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:44.193 09:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.193 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.193 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.454 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.454 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:44.454 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.454 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.454 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.713 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.713 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:44.713 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.713 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.713 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.974 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.974 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:44.974 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.974 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.974 09:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.545 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.545 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:45.545 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.545 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.545 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.806 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.806 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:45.806 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.806 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.806 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.066 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.066 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:46.066 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.066 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.066 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.327 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.327 09:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:46.327 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.327 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.327 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.587 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.587 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:46.587 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.587 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.587 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.848 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.848 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:46.848 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.848 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.848 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.416 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.416 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:47.416 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.416 09:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.416 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.675 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.675 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:47.675 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.675 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.675 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.935 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.935 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:47.935 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.935 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.935 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.194 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.194 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:48.194 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.194 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.194 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.453 09:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.453 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:48.453 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.453 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.453 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.022 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.022 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:49.022 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.022 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.022 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.282 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.282 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:49.282 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.282 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.282 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.541 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.541 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:49.541 
09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.541 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.541 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.802 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.802 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:49.802 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.802 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.802 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.061 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.061 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:50.061 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.061 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.061 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.631 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.631 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:50.631 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.631 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.631 
09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.892 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.892 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:50.892 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.892 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.892 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.152 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.152 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:51.152 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.152 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.152 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.411 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.411 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:51.411 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.411 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.411 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.669 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.669 09:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.669 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 185737 00:13:51.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (185737) - No such process 00:13:51.669 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 185737 00:13:51.669 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.669 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:51.669 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:51.669 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:51.669 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:51.670 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.670 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:51.670 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.930 rmmod nvme_tcp 00:13:51.930 rmmod nvme_fabrics 00:13:51.930 rmmod nvme_keyring 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 185712 ']' 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 185712 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 185712 ']' 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 185712 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 185712 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 185712' 00:13:51.930 killing process with pid 185712 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 185712 00:13:51.930 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 185712 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.191 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.099 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:54.099 00:13:54.099 real 0m15.568s 00:13:54.099 user 0m40.220s 00:13:54.099 sys 0m4.631s 00:13:54.099 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.099 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.099 ************************************ 00:13:54.099 END TEST nvmf_connect_stress 00:13:54.100 ************************************ 00:13:54.100 09:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:54.100 09:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:54.100 09:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:13:54.100 09:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.358 ************************************ 00:13:54.358 START TEST nvmf_fused_ordering 00:13:54.358 ************************************ 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:54.358 * Looking for test storage... 00:13:54.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:13:54.358 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:54.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.359 --rc genhtml_branch_coverage=1 00:13:54.359 --rc genhtml_function_coverage=1 00:13:54.359 --rc genhtml_legend=1 00:13:54.359 --rc geninfo_all_blocks=1 00:13:54.359 --rc geninfo_unexecuted_blocks=1 00:13:54.359 00:13:54.359 ' 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:54.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.359 --rc genhtml_branch_coverage=1 00:13:54.359 --rc genhtml_function_coverage=1 00:13:54.359 --rc genhtml_legend=1 00:13:54.359 --rc geninfo_all_blocks=1 00:13:54.359 --rc geninfo_unexecuted_blocks=1 00:13:54.359 00:13:54.359 ' 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:54.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.359 --rc genhtml_branch_coverage=1 00:13:54.359 --rc genhtml_function_coverage=1 00:13:54.359 --rc genhtml_legend=1 00:13:54.359 --rc geninfo_all_blocks=1 00:13:54.359 --rc geninfo_unexecuted_blocks=1 00:13:54.359 00:13:54.359 ' 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:54.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.359 --rc genhtml_branch_coverage=1 
00:13:54.359 --rc genhtml_function_coverage=1 00:13:54.359 --rc genhtml_legend=1 00:13:54.359 --rc geninfo_all_blocks=1 00:13:54.359 --rc geninfo_unexecuted_blocks=1 00:13:54.359 00:13:54.359 ' 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:13:54.359 09:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.359 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.360 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.266 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.267 09:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:13:56.267 Found 0000:09:00.0 (0x8086 - 0x1592) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.267 09:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:13:56.267 Found 0000:09:00.1 (0x8086 - 0x1592) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.267 09:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:56.267 Found net devices under 0000:09:00.0: cvl_0_0 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:56.267 Found net devices under 0000:09:00.1: cvl_0_1 
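The two "Found net devices under ..." lines above come from common.sh expanding the glob `/sys/bus/pci/devices/$pci/net/*` for each supported PCI function and keeping only the basenames. A minimal sketch of that lookup (the function name and the optional sysfs-root parameter are illustrative additions, so the logic can be exercised against a mock tree without the real hardware):

```shell
# Resolve the kernel net interface name(s) behind a PCI address the way
# gather_supported_nvmf_pci_devs does: glob the device's net/ directory in
# sysfs and keep only the basenames (e.g. cvl_0_0 for 0000:09:00.0).
pci_to_netdevs() {
    local pci=$1 sysfs=${2:-/sys} entry names=()
    for entry in "$sysfs/bus/pci/devices/$pci/net/"*; do
        [ -e "$entry" ] && names+=("${entry##*/}")   # strip leading path, keep ifname
    done
    printf '%s\n' "${names[@]}"
}
```

Against the real /sys on the machine in this log this yields cvl_0_0 and cvl_0_1 for the two E810 functions; a PCI function with no bound network driver simply contributes no names.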
00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.267 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:56.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:13:56.528 00:13:56.528 --- 10.0.0.2 ping statistics --- 00:13:56.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.528 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:13:56.528 00:13:56.528 --- 10.0.0.1 ping statistics --- 00:13:56.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.528 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:56.528 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:56.529 09:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=188863 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 188863 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 188863 ']' 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.529 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.529 [2024-10-07 09:34:45.422062] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:13:56.529 [2024-10-07 09:34:45.422142] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.529 [2024-10-07 09:34:45.490216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.791 [2024-10-07 09:34:45.599318] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.791 [2024-10-07 09:34:45.599393] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.791 [2024-10-07 09:34:45.599421] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.791 [2024-10-07 09:34:45.599432] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.791 [2024-10-07 09:34:45.599441] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
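The network the target starts on was assembled a few lines earlier by nvmftestinit: one port of the E810 pair is moved into a private namespace and addressed as 10.0.0.2, its sibling stays in the root namespace as 10.0.0.1, and TCP port 4420 is opened in iptables before a ping verifies reachability. A dry-run sketch of that sequence (commands are collected rather than executed, since running it for real needs root and the cvl_0_* interfaces; names and addresses are copied from the log):

```shell
# Dry-run reconstruction of the nvmftestinit network setup seen above.
CMDS=()
run() { CMDS+=("$*"); }   # swap for: run() { "$@"; }  to actually execute

NETNS=cvl_0_0_ns_spdk
run ip netns add "$NETNS"
run ip link set cvl_0_0 netns "$NETNS"                         # target port
run ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
run ip link set cvl_0_1 up
run ip netns exec "$NETNS" ip link set cvl_0_0 up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                         # reachability check

printf '%s\n' "${CMDS[@]}"
```

This is why the nvmf_tgt invocation above is wrapped in `ip netns exec cvl_0_0_ns_spdk ...`: the target must run inside the namespace that owns cvl_0_0 and 10.0.0.2.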
00:13:56.791 [2024-10-07 09:34:45.600070] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 [2024-10-07 09:34:45.744067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 [2024-10-07 09:34:45.760277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 NULL1 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:57.050 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.050 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:57.050 [2024-10-07 09:34:45.804084] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:13:57.051 [2024-10-07 09:34:45.804119] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188888 ] 00:13:57.620 Attached to nqn.2016-06.io.spdk:cnode1 00:13:57.620 Namespace ID: 1 size: 1GB 00:13:57.620 fused_ordering(0) 00:13:57.620 fused_ordering(1) 00:13:57.620 fused_ordering(2) 00:13:57.620 fused_ordering(3) 00:13:57.620 fused_ordering(4) 00:13:57.620 fused_ordering(5) 00:13:57.620 fused_ordering(6) 00:13:57.620 fused_ordering(7) 00:13:57.620 fused_ordering(8) 00:13:57.620 fused_ordering(9) 00:13:57.620 fused_ordering(10) 00:13:57.620 fused_ordering(11) 00:13:57.620 fused_ordering(12) 00:13:57.620 fused_ordering(13) 00:13:57.620 fused_ordering(14) 00:13:57.620 fused_ordering(15) 00:13:57.620 fused_ordering(16) 00:13:57.620 fused_ordering(17) 00:13:57.620 fused_ordering(18) 00:13:57.620 fused_ordering(19) 00:13:57.620 fused_ordering(20) 00:13:57.620 fused_ordering(21) 00:13:57.620 fused_ordering(22) 00:13:57.620 fused_ordering(23) 00:13:57.620 fused_ordering(24) 00:13:57.620 fused_ordering(25) 00:13:57.620 fused_ordering(26) 00:13:57.620 fused_ordering(27) 00:13:57.620 
fused_ordering(28) 00:13:57.620 fused_ordering(29) 00:13:57.620 fused_ordering(30) 00:13:57.620 fused_ordering(31) 00:13:57.620 fused_ordering(32) 00:13:57.620 fused_ordering(33) 00:13:57.620 fused_ordering(34) 00:13:57.620 fused_ordering(35) 00:13:57.620 fused_ordering(36) 00:13:57.620 fused_ordering(37) 00:13:57.620 fused_ordering(38) 00:13:57.620 fused_ordering(39) 00:13:57.620 fused_ordering(40) 00:13:57.620 fused_ordering(41) 00:13:57.620 fused_ordering(42) 00:13:57.620 fused_ordering(43) 00:13:57.620 fused_ordering(44) 00:13:57.620 fused_ordering(45) 00:13:57.620 fused_ordering(46) 00:13:57.620 fused_ordering(47) 00:13:57.620 fused_ordering(48) 00:13:57.620 fused_ordering(49) 00:13:57.620 fused_ordering(50) 00:13:57.620 fused_ordering(51) 00:13:57.620 fused_ordering(52) 00:13:57.620 fused_ordering(53) 00:13:57.620 fused_ordering(54) 00:13:57.620 fused_ordering(55) 00:13:57.620 fused_ordering(56) 00:13:57.620 fused_ordering(57) 00:13:57.620 fused_ordering(58) 00:13:57.620 fused_ordering(59) 00:13:57.620 fused_ordering(60) 00:13:57.620 fused_ordering(61) 00:13:57.620 fused_ordering(62) 00:13:57.620 fused_ordering(63) 00:13:57.620 fused_ordering(64) 00:13:57.620 fused_ordering(65) 00:13:57.620 fused_ordering(66) 00:13:57.620 fused_ordering(67) 00:13:57.620 fused_ordering(68) 00:13:57.620 fused_ordering(69) 00:13:57.620 fused_ordering(70) 00:13:57.620 fused_ordering(71) 00:13:57.620 fused_ordering(72) 00:13:57.620 fused_ordering(73) 00:13:57.620 fused_ordering(74) 00:13:57.620 fused_ordering(75) 00:13:57.620 fused_ordering(76) 00:13:57.620 fused_ordering(77) 00:13:57.620 fused_ordering(78) 00:13:57.620 fused_ordering(79) 00:13:57.620 fused_ordering(80) 00:13:57.620 fused_ordering(81) 00:13:57.620 fused_ordering(82) 00:13:57.620 fused_ordering(83) 00:13:57.620 fused_ordering(84) 00:13:57.620 fused_ordering(85) 00:13:57.620 fused_ordering(86) 00:13:57.620 fused_ordering(87) 00:13:57.620 fused_ordering(88) 00:13:57.620 fused_ordering(89) 00:13:57.620 
fused_ordering(90) 00:13:57.620 fused_ordering(91) 00:13:57.620 fused_ordering(92) 00:13:57.620 fused_ordering(93) 00:13:57.620 fused_ordering(94) 00:13:57.620 fused_ordering(95) 00:13:57.620 fused_ordering(96) 00:13:57.620 fused_ordering(97) 00:13:57.620 fused_ordering(98) 00:13:57.620 fused_ordering(99) 00:13:57.620 fused_ordering(100) 00:13:57.620 fused_ordering(101) 00:13:57.620 fused_ordering(102) 00:13:57.620 fused_ordering(103) 00:13:57.620 fused_ordering(104) 00:13:57.620 fused_ordering(105) 00:13:57.620 fused_ordering(106) 00:13:57.620 fused_ordering(107) 00:13:57.620 fused_ordering(108) 00:13:57.620 fused_ordering(109) 00:13:57.620 fused_ordering(110) 00:13:57.620 fused_ordering(111) 00:13:57.620 fused_ordering(112) 00:13:57.620 fused_ordering(113) 00:13:57.620 fused_ordering(114) 00:13:57.620 fused_ordering(115) 00:13:57.620 fused_ordering(116) 00:13:57.620 fused_ordering(117) 00:13:57.620 fused_ordering(118) 00:13:57.620 fused_ordering(119) 00:13:57.620 fused_ordering(120) 00:13:57.620 fused_ordering(121) 00:13:57.620 fused_ordering(122) 00:13:57.620 fused_ordering(123) 00:13:57.620 fused_ordering(124) 00:13:57.620 fused_ordering(125) 00:13:57.620 fused_ordering(126) 00:13:57.620 fused_ordering(127) 00:13:57.620 fused_ordering(128) 00:13:57.620 fused_ordering(129) 00:13:57.620 fused_ordering(130) 00:13:57.620 fused_ordering(131) 00:13:57.620 fused_ordering(132) 00:13:57.620 fused_ordering(133) 00:13:57.620 fused_ordering(134) 00:13:57.620 fused_ordering(135) 00:13:57.620 fused_ordering(136) 00:13:57.620 fused_ordering(137) 00:13:57.620 fused_ordering(138) 00:13:57.620 fused_ordering(139) 00:13:57.620 fused_ordering(140) 00:13:57.620 fused_ordering(141) 00:13:57.620 fused_ordering(142) 00:13:57.620 fused_ordering(143) 00:13:57.620 fused_ordering(144) 00:13:57.620 fused_ordering(145) 00:13:57.620 fused_ordering(146) 00:13:57.620 fused_ordering(147) 00:13:57.620 fused_ordering(148) 00:13:57.620 fused_ordering(149) 00:13:57.620 fused_ordering(150) 
00:13:57.620 fused_ordering(151) 00:13:57.620 fused_ordering(152) 00:13:57.620 fused_ordering(153) 00:13:57.620 fused_ordering(154) 00:13:57.620 fused_ordering(155) 00:13:57.620 fused_ordering(156) 00:13:57.620 fused_ordering(157) 00:13:57.620 fused_ordering(158) 00:13:57.620 fused_ordering(159) 00:13:57.620 fused_ordering(160) 00:13:57.620 fused_ordering(161) 00:13:57.620 fused_ordering(162) 00:13:57.620 fused_ordering(163) 00:13:57.620 fused_ordering(164) 00:13:57.620 fused_ordering(165) 00:13:57.620 fused_ordering(166) 00:13:57.620 fused_ordering(167) 00:13:57.620 fused_ordering(168) 00:13:57.620 fused_ordering(169) 00:13:57.620 fused_ordering(170) 00:13:57.620 fused_ordering(171) 00:13:57.620 fused_ordering(172) 00:13:57.620 fused_ordering(173) 00:13:57.620 fused_ordering(174) 00:13:57.620 fused_ordering(175) 00:13:57.620 fused_ordering(176) 00:13:57.620 fused_ordering(177) 00:13:57.620 fused_ordering(178) 00:13:57.620 fused_ordering(179) 00:13:57.620 fused_ordering(180) 00:13:57.620 fused_ordering(181) 00:13:57.620 fused_ordering(182) 00:13:57.620 fused_ordering(183) 00:13:57.620 fused_ordering(184) 00:13:57.620 fused_ordering(185) 00:13:57.620 fused_ordering(186) 00:13:57.620 fused_ordering(187) 00:13:57.620 fused_ordering(188) 00:13:57.620 fused_ordering(189) 00:13:57.620 fused_ordering(190) 00:13:57.620 fused_ordering(191) 00:13:57.620 fused_ordering(192) 00:13:57.620 fused_ordering(193) 00:13:57.620 fused_ordering(194) 00:13:57.620 fused_ordering(195) 00:13:57.620 fused_ordering(196) 00:13:57.620 fused_ordering(197) 00:13:57.620 fused_ordering(198) 00:13:57.620 fused_ordering(199) 00:13:57.620 fused_ordering(200) 00:13:57.620 fused_ordering(201) 00:13:57.620 fused_ordering(202) 00:13:57.620 fused_ordering(203) 00:13:57.620 fused_ordering(204) 00:13:57.620 fused_ordering(205) 00:13:57.882 fused_ordering(206) 00:13:57.882 fused_ordering(207) 00:13:57.882 fused_ordering(208) 00:13:57.882 fused_ordering(209) 00:13:57.882 fused_ordering(210) 00:13:57.882 
fused_ordering(211)
[fused_ordering(212) through fused_ordering(1022) elided: the counter advances monotonically, one entry per iteration, timestamps 00:13:57.882 through 00:13:59.287]
00:13:59.287 fused_ordering(1023)
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 188863 ']'
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 188863
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 188863 ']'
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 188863
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 188863
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 188863'
killing process with pid 188863
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 188863
00:13:59.287 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 188863
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
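The killprocess trace above (common/autotest_common.sh) guards the kill with a liveness check (kill -0) and refuses to kill a process named sudo before signalling and reaping the target. A minimal standalone sketch of that pattern; the function name and exact checks here are illustrative, not the real SPDK helper:

```shell
# Sketch of a killprocess-style teardown helper, assuming Linux and GNU ps.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0           # nothing to do if already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1       # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap it if it is our child
}

sleep 30 &
killprocess_sketch $!
```

The wait at the end matters: without it a killed child lingers as a zombie until the parent shell reaps it.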
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:59.550 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:02.095
00:14:02.095 real 0m7.347s
00:14:02.095 user 0m5.181s
00:14:02.095 sys 0m2.743s
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:02.095 ************************************
00:14:02.095 END TEST nvmf_fused_ordering
00:14:02.095 ************************************
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:02.095 ************************************
00:14:02.095 START TEST nvmf_ns_masking
00:14:02.095 ************************************
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:14:02.095 * Looking for test storage...
00:14:02.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:14:02.095 09:34:50
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:14:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.095 --rc genhtml_branch_coverage=1
00:14:02.095 --rc genhtml_function_coverage=1
00:14:02.095 --rc genhtml_legend=1
00:14:02.095 --rc geninfo_all_blocks=1
00:14:02.095 --rc geninfo_unexecuted_blocks=1
00:14:02.095
00:14:02.095 '
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:14:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.095 --rc genhtml_branch_coverage=1
00:14:02.095 --rc genhtml_function_coverage=1
00:14:02.095 --rc genhtml_legend=1
00:14:02.095 --rc geninfo_all_blocks=1
00:14:02.095 --rc geninfo_unexecuted_blocks=1
00:14:02.095
00:14:02.095 '
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:14:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.095 --rc genhtml_branch_coverage=1
00:14:02.095 --rc genhtml_function_coverage=1
00:14:02.095 --rc genhtml_legend=1
00:14:02.095 --rc geninfo_all_blocks=1
00:14:02.095 --rc geninfo_unexecuted_blocks=1
00:14:02.095
00:14:02.095 '
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:14:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.095 --rc genhtml_branch_coverage=1
00:14:02.095 --rc genhtml_function_coverage=1
00:14:02.095 --rc genhtml_legend=1
00:14:02.095 --rc geninfo_all_blocks=1
00:14:02.095 --rc geninfo_unexecuted_blocks=1
00:14:02.095
00:14:02.095 '
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:02.095 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=808d1653-54a7-410b-bc20-b8ce48362891 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d91872ef-2d33-4318-9257-13459872d011 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=52cd7fb3-9c58-41a7-b536-6c2ba8dff600 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g 
is_hw=no 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:02.096 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.000 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.000 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:04.000 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:04.000 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:04.000 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:04.000 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:04.000 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:04.001 09:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.001 09:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:14:04.001 Found 0000:09:00.0 (0x8086 - 0x1592) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:14:04.001 Found 0000:09:00.1 (0x8086 - 0x1592) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:14:04.001 Found net devices under 0000:09:00.0: cvl_0_0 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:04.001 Found net devices under 0000:09:00.1: cvl_0_1 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.001 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:04.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:14:04.002 00:14:04.002 --- 10.0.0.2 ping statistics --- 00:14:04.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.002 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:04.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:14:04.002 00:14:04.002 --- 10.0.0.1 ping statistics --- 00:14:04.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.002 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=190994 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 190994 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 190994 ']' 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.002 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.002 [2024-10-07 09:34:52.853701] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:14:04.002 [2024-10-07 09:34:52.853774] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.002 [2024-10-07 09:34:52.914561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.259 [2024-10-07 09:34:53.022724] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.259 [2024-10-07 09:34:53.022780] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:04.259 [2024-10-07 09:34:53.022808] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.259 [2024-10-07 09:34:53.022820] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.259 [2024-10-07 09:34:53.022829] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.259 [2024-10-07 09:34:53.023387] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.259 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.259 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:04.259 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:04.259 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:04.259 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.259 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.259 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:04.516 [2024-10-07 09:34:53.399833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.516 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:04.516 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:04.516 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:04.774 Malloc1 00:14:04.774 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:05.339 Malloc2 00:14:05.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:05.598 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:05.856 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.116 [2024-10-07 09:34:54.875757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.116 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:06.116 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 52cd7fb3-9c58-41a7-b536-6c2ba8dff600 -a 10.0.0.2 -s 4420 -i 4 00:14:06.116 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.116 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.116 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.116 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:06.116 09:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.662 [ 0]:0x1 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.662 
09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5765de13055847078ade06817c5b70e4 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5765de13055847078ade06817c5b70e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.662 [ 0]:0x1 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5765de13055847078ade06817c5b70e4 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5765de13055847078ade06817c5b70e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.662 [ 1]:0x2 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca68730e54014811a458885781992ceb 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca68730e54014811a458885781992ceb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:08.662 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.921 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.179 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:09.747 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:09.747 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 52cd7fb3-9c58-41a7-b536-6c2ba8dff600 -a 10.0.0.2 -s 4420 -i 4 00:14:09.747 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:09.747 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:09.747 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.747 09:34:58 
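[Editor's note, not part of the log] The repeated `ns_is_visible` checks above boil down to: run `nvme id-ns` on the namespace, extract `.nguid` with `jq`, and treat an all-zero NGUID as "masked". A minimal sketch of that comparison, with the hardware-dependent query left as a comment (the helper name `ns_state` is hypothetical, for illustration only):

```shell
# Real query (needs nvme-cli and a connected controller):
#   nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
# The test considers the namespace visible iff the NGUID is non-zero.
ns_state() {
  local nguid=$1
  if [[ $nguid == "00000000000000000000000000000000" ]]; then
    echo masked
  else
    echo visible
  fi
}

ns_state 5765de13055847078ade06817c5b70e4    # visible
ns_state 00000000000000000000000000000000    # masked
```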
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:09.747 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:09.747 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.658 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.917 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.918 [ 0]:0x2 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca68730e54014811a458885781992ceb 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca68730e54014811a458885781992ceb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.918 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.178 [ 0]:0x1 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5765de13055847078ade06817c5b70e4 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5765de13055847078ade06817c5b70e4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.178 [ 1]:0x2 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca68730e54014811a458885781992ceb 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca68730e54014811a458885781992ceb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.178 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.437 [ 0]:0x2 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.437 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.695 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca68730e54014811a458885781992ceb 00:14:12.695 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca68730e54014811a458885781992ceb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.695 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:12.695 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.696 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.954 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:12.954 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 52cd7fb3-9c58-41a7-b536-6c2ba8dff600 -a 10.0.0.2 -s 4420 -i 4 00:14:12.954 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:12.954 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:12.954 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.954 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:12.954 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
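[Editor's note, not part of the log] The masking phase that just finished is driven by three RPCs: add the namespace with `--no-auto-visible`, then grant and revoke per-host access. A sketch of that sequence, assuming a running SPDK target; here `rpc_cmd` only constructs the command lines so the flow is visible without a live socket:

```shell
# Hypothetical wrapper: in the real test this invokes scripts/rpc.py against
# the target's RPC socket; here it just echoes the command it would run.
rpc_cmd() { echo "scripts/rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

# Namespace starts hidden from all hosts:
rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1 --no-auto-visible
# Grant visibility of nsid 1 to one host (triggers an AER on that host):
rpc_cmd nvmf_ns_add_host "$NQN" 1 "$HOST"
# Revoke it again; the host's next list-ns shows an all-zero NGUID:
rpc_cmd nvmf_ns_remove_host "$NQN" 1 "$HOST"
```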
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:12.954 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.494 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.494 [ 0]:0x1 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.494 09:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5765de13055847078ade06817c5b70e4 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5765de13055847078ade06817c5b70e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.494 [ 1]:0x2 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca68730e54014811a458885781992ceb 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca68730e54014811a458885781992ceb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:15.494 
09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:15.494 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.495 [ 0]:0x2 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca68730e54014811a458885781992ceb 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca68730e54014811a458885781992ceb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.495 09:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:15.495 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:15.755 [2024-10-07 09:35:04.737621] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:15.755 request: 00:14:15.755 { 00:14:15.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.755 "nsid": 2, 00:14:15.755 "host": "nqn.2016-06.io.spdk:host1", 00:14:15.755 "method": "nvmf_ns_remove_host", 00:14:15.755 "req_id": 1 00:14:15.755 } 00:14:15.755 Got JSON-RPC error response 00:14:15.755 response: 00:14:15.755 { 00:14:15.755 "code": -32602, 00:14:15.755 "message": "Invalid parameters" 00:14:15.755 } 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:16.014 09:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:16.014 [ 0]:0x2 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca68730e54014811a458885781992ceb 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca68730e54014811a458885781992ceb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=192547 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 192547 /var/tmp/host.sock 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 192547 ']' 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:16.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.014 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.014 [2024-10-07 09:35:04.958804] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:14:16.014 [2024-10-07 09:35:04.958881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192547 ] 00:14:16.272 [2024-10-07 09:35:05.014386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.272 [2024-10-07 09:35:05.120932] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.530 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.530 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:16.530 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.788 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:17.047 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 808d1653-54a7-410b-bc20-b8ce48362891 00:14:17.047 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:17.047 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 808D165354A7410BBC20B8CE48362891 -i 00:14:17.306 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d91872ef-2d33-4318-9257-13459872d011 00:14:17.306 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:17.306 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D91872EF2D334318925713459872D011 -i 00:14:17.564 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.822 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:18.080 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:18.080 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:18.649 nvme0n1 00:14:18.649 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:18.649 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:18.908 nvme1n2 00:14:18.908 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:18.908 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:18.908 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:18.908 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:18.908 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:19.166 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:19.166 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:19.166 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:19.166 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:19.425 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 808d1653-54a7-410b-bc20-b8ce48362891 == \8\0\8\d\1\6\5\3\-\5\4\a\7\-\4\1\0\b\-\b\c\2\0\-\b\8\c\e\4\8\3\6\2\8\9\1 ]] 00:14:19.425 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:19.425 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:19.425 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d91872ef-2d33-4318-9257-13459872d011 == \d\9\1\8\7\2\e\f\-\2\d\3\3\-\4\3\1\8\-\9\2\5\7\-\1\3\4\5\9\8\7\2\d\0\1\1 ]] 00:14:19.685 09:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 192547 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 192547 ']' 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 192547 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 192547 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 192547' 00:14:19.685 killing process with pid 192547 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 192547 00:14:19.685 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 192547 00:14:20.256 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@121 -- # sync 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:20.514 rmmod nvme_tcp 00:14:20.514 rmmod nvme_fabrics 00:14:20.514 rmmod nvme_keyring 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 190994 ']' 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 190994 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 190994 ']' 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 190994 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 190994 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.514 09:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 190994' 00:14:20.514 killing process with pid 190994 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 190994 00:14:20.514 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 190994 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.085 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:22.998 00:14:22.998 real 0m21.364s 00:14:22.998 user 0m28.401s 00:14:22.998 sys 
0m4.023s 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:22.998 ************************************ 00:14:22.998 END TEST nvmf_ns_masking 00:14:22.998 ************************************ 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.998 ************************************ 00:14:22.998 START TEST nvmf_nvme_cli 00:14:22.998 ************************************ 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:22.998 * Looking for test storage... 
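The ns_masking run above passes `-g 808D165354A7410BBC20B8CE48362891` to `nvmf_subsystem_add_ns`, a value the trace derives from the UUID `808d1653-54a7-410b-bc20-b8ce48362891` via the `uuid2nguid` helper and `tr -d -`. A minimal standalone sketch of that conversion (the real helper lives in `nvmf/common.sh`; only the dash-stripping `tr` appears in the trace, so the uppercase step is an assumption inferred from the output):

```shell
# Sketch of the UUID -> NGUID conversion seen in the ns_masking trace:
# strip the dashes, then uppercase (the -g argument in the log is uppercase).
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 808d1653-54a7-410b-bc20-b8ce48362891
# → 808D165354A7410BBC20B8CE48362891
```

The resulting 32-hex-digit string is what the test later matches against `.[].uuid` from `bdev_get_bdevs` to confirm each host only sees its masked namespace.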
00:14:22.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:14:22.998 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:23.258 09:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:23.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.258 --rc 
genhtml_branch_coverage=1 00:14:23.258 --rc genhtml_function_coverage=1 00:14:23.258 --rc genhtml_legend=1 00:14:23.258 --rc geninfo_all_blocks=1 00:14:23.258 --rc geninfo_unexecuted_blocks=1 00:14:23.258 00:14:23.258 ' 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:23.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.258 --rc genhtml_branch_coverage=1 00:14:23.258 --rc genhtml_function_coverage=1 00:14:23.258 --rc genhtml_legend=1 00:14:23.258 --rc geninfo_all_blocks=1 00:14:23.258 --rc geninfo_unexecuted_blocks=1 00:14:23.258 00:14:23.258 ' 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:23.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.258 --rc genhtml_branch_coverage=1 00:14:23.258 --rc genhtml_function_coverage=1 00:14:23.258 --rc genhtml_legend=1 00:14:23.258 --rc geninfo_all_blocks=1 00:14:23.258 --rc geninfo_unexecuted_blocks=1 00:14:23.258 00:14:23.258 ' 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:23.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.258 --rc genhtml_branch_coverage=1 00:14:23.258 --rc genhtml_function_coverage=1 00:14:23.258 --rc genhtml_legend=1 00:14:23.258 --rc geninfo_all_blocks=1 00:14:23.258 --rc geninfo_unexecuted_blocks=1 00:14:23.258 00:14:23.258 ' 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.258 09:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.258 09:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.258 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.259 09:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:23.259 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:25.166 09:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:14:25.166 Found 0000:09:00.0 (0x8086 - 0x1592) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:14:25.166 Found 0000:09:00.1 (0x8086 - 0x1592) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:25.166 09:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:25.166 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:25.167 Found net devices under 0000:09:00.0: cvl_0_0 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:25.167 Found net devices under 0000:09:00.1: cvl_0_1 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.167 09:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.167 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:25.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:14:25.428 00:14:25.428 --- 10.0.0.2 ping statistics --- 00:14:25.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.428 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:14:25.428 00:14:25.428 --- 10.0.0.1 ping statistics --- 00:14:25.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.428 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:25.428 09:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=194932 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 194932 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 194932 ']' 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.428 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.428 [2024-10-07 09:35:14.346034] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:14:25.428 [2024-10-07 09:35:14.346111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.428 [2024-10-07 09:35:14.408525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.687 [2024-10-07 09:35:14.515823] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.687 [2024-10-07 09:35:14.515878] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.687 [2024-10-07 09:35:14.515908] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.687 [2024-10-07 09:35:14.515919] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.687 [2024-10-07 09:35:14.515929] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:25.687 [2024-10-07 09:35:14.517499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.687 [2024-10-07 09:35:14.517599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.687 [2024-10-07 09:35:14.517758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.687 [2024-10-07 09:35:14.517762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.687 [2024-10-07 09:35:14.673513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:25.687 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.960 Malloc0 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.960 Malloc1 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.960 [2024-10-07 09:35:14.759357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 4420 00:14:25.960 00:14:25.960 Discovery Log Number of Records 2, Generation counter 2 00:14:25.960 =====Discovery Log Entry 0====== 00:14:25.960 trtype: tcp 00:14:25.960 adrfam: ipv4 00:14:25.960 subtype: current discovery subsystem 00:14:25.960 treq: not required 00:14:25.960 portid: 0 00:14:25.960 trsvcid: 4420 
00:14:25.960 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:25.960 traddr: 10.0.0.2 00:14:25.960 eflags: explicit discovery connections, duplicate discovery information 00:14:25.960 sectype: none 00:14:25.960 =====Discovery Log Entry 1====== 00:14:25.960 trtype: tcp 00:14:25.960 adrfam: ipv4 00:14:25.960 subtype: nvme subsystem 00:14:25.960 treq: not required 00:14:25.960 portid: 0 00:14:25.960 trsvcid: 4420 00:14:25.960 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:25.960 traddr: 10.0.0.2 00:14:25.960 eflags: none 00:14:25.960 sectype: none 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:25.960 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.900 09:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:26.900 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:26.900 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.900 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:26.900 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:26.900 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:28.806 
09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:28.806 /dev/nvme0n2 ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.806 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.806 rmmod nvme_tcp 00:14:28.806 rmmod nvme_fabrics 00:14:28.806 rmmod nvme_keyring 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 194932 ']' 
00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 194932 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 194932 ']' 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 194932 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 194932 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 194932' 00:14:29.065 killing process with pid 194932 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 194932 00:14:29.065 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 194932 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.325 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.236 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:31.236 00:14:31.236 real 0m8.277s 00:14:31.236 user 0m15.002s 00:14:31.236 sys 0m2.244s 00:14:31.236 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.236 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.236 ************************************ 00:14:31.236 END TEST nvmf_nvme_cli 00:14:31.236 ************************************ 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.497 ************************************ 00:14:31.497 START TEST 
nvmf_vfio_user 00:14:31.497 ************************************ 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:31.497 * Looking for test storage... 00:14:31.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.497 09:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:31.497 09:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:31.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.497 --rc genhtml_branch_coverage=1 00:14:31.497 --rc genhtml_function_coverage=1 00:14:31.497 --rc genhtml_legend=1 00:14:31.497 --rc geninfo_all_blocks=1 00:14:31.497 --rc geninfo_unexecuted_blocks=1 00:14:31.497 00:14:31.497 ' 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:31.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.497 --rc genhtml_branch_coverage=1 00:14:31.497 --rc genhtml_function_coverage=1 00:14:31.497 --rc genhtml_legend=1 00:14:31.497 --rc geninfo_all_blocks=1 00:14:31.497 --rc geninfo_unexecuted_blocks=1 00:14:31.497 00:14:31.497 ' 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:31.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.497 --rc genhtml_branch_coverage=1 00:14:31.497 --rc genhtml_function_coverage=1 00:14:31.497 --rc genhtml_legend=1 00:14:31.497 --rc geninfo_all_blocks=1 00:14:31.497 --rc geninfo_unexecuted_blocks=1 00:14:31.497 00:14:31.497 ' 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:31.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.497 --rc genhtml_branch_coverage=1 00:14:31.497 --rc genhtml_function_coverage=1 00:14:31.497 --rc genhtml_legend=1 00:14:31.497 --rc geninfo_all_blocks=1 00:14:31.497 --rc geninfo_unexecuted_blocks=1 00:14:31.497 00:14:31.497 ' 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:31.497 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.498 
09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:31.498 09:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=195827 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 195827' 00:14:31.498 Process pid: 195827 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 195827 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 
195827 ']' 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:31.498 [2024-10-07 09:35:20.487635] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:14:31.498 [2024-10-07 09:35:20.487761] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.759 [2024-10-07 09:35:20.547708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.759 [2024-10-07 09:35:20.656968] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.759 [2024-10-07 09:35:20.657025] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.759 [2024-10-07 09:35:20.657048] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.759 [2024-10-07 09:35:20.657059] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.759 [2024-10-07 09:35:20.657068] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:31.759 [2024-10-07 09:35:20.658503] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.759 [2024-10-07 09:35:20.658569] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.759 [2024-10-07 09:35:20.658634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.759 [2024-10-07 09:35:20.658637] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.019 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.019 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:32.019 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:32.958 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:33.216 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:33.216 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:33.216 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:33.216 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:33.216 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:33.475 Malloc1 00:14:33.475 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:33.733 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:33.992 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:34.251 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:34.251 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:34.251 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:34.510 Malloc2 00:14:34.510 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:34.768 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:35.027 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:35.285 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:35.285 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:35.285 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:35.285 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:35.285 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:35.285 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:35.285 [2024-10-07 09:35:24.270451] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:14:35.285 [2024-10-07 09:35:24.270492] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196232 ] 00:14:35.546 [2024-10-07 09:35:24.304028] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:35.546 [2024-10-07 09:35:24.313129] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:35.546 [2024-10-07 09:35:24.313161] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faa9aacd000 00:14:35.546 [2024-10-07 09:35:24.314122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.546 [2024-10-07 09:35:24.315113] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.546 [2024-10-07 09:35:24.316116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.546 [2024-10-07 09:35:24.317120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.546 [2024-10-07 09:35:24.318127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.546 [2024-10-07 09:35:24.319133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.546 [2024-10-07 09:35:24.320139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.546 [2024-10-07 09:35:24.321143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.546 [2024-10-07 09:35:24.322154] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:35.546 [2024-10-07 09:35:24.322174] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faa9aac2000 00:14:35.546 [2024-10-07 09:35:24.323291] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:35.546 [2024-10-07 09:35:24.342935] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:35.546 [2024-10-07 09:35:24.342984] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:35.546 [2024-10-07 09:35:24.345279] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:35.546 [2024-10-07 09:35:24.345329] 
nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:35.546 [2024-10-07 09:35:24.345416] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:35.546 [2024-10-07 09:35:24.345443] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:35.546 [2024-10-07 09:35:24.345454] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:35.546 [2024-10-07 09:35:24.346276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:35.546 [2024-10-07 09:35:24.346295] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:35.546 [2024-10-07 09:35:24.346307] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:35.546 [2024-10-07 09:35:24.347281] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:35.546 [2024-10-07 09:35:24.347300] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:35.546 [2024-10-07 09:35:24.347313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:35.546 [2024-10-07 09:35:24.348283] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:35.546 [2024-10-07 09:35:24.348301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:35.546 [2024-10-07 09:35:24.349288] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:35.546 [2024-10-07 09:35:24.349307] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:35.546 [2024-10-07 09:35:24.349315] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:35.546 [2024-10-07 09:35:24.349326] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:35.546 [2024-10-07 09:35:24.349436] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:35.546 [2024-10-07 09:35:24.349444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:35.546 [2024-10-07 09:35:24.349452] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:35.546 [2024-10-07 09:35:24.350296] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:35.546 [2024-10-07 09:35:24.351299] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:35.546 [2024-10-07 09:35:24.352308] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:35.546 [2024-10-07 09:35:24.353301] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:35.546 [2024-10-07 09:35:24.353394] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:35.546 [2024-10-07 09:35:24.354322] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:35.546 [2024-10-07 09:35:24.354340] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:35.546 [2024-10-07 09:35:24.354349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:35.546 [2024-10-07 09:35:24.354373] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:35.546 [2024-10-07 09:35:24.354387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:35.546 [2024-10-07 09:35:24.354412] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.546 [2024-10-07 09:35:24.354421] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.546 [2024-10-07 09:35:24.354428] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.546 [2024-10-07 09:35:24.354447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.546 [2024-10-07 09:35:24.354505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:35.546 [2024-10-07 
09:35:24.354521] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:35.546 [2024-10-07 09:35:24.354530] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:35.546 [2024-10-07 09:35:24.354537] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:35.546 [2024-10-07 09:35:24.354544] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:35.546 [2024-10-07 09:35:24.354555] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:35.546 [2024-10-07 09:35:24.354563] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:35.546 [2024-10-07 09:35:24.354571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:35.546 [2024-10-07 09:35:24.354583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:35.546 [2024-10-07 09:35:24.354598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:35.546 [2024-10-07 09:35:24.354619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.354635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.547 [2024-10-07 09:35:24.354648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:14:35.547 [2024-10-07 09:35:24.354660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.547 [2024-10-07 09:35:24.354697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.547 [2024-10-07 09:35:24.354707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.354726] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.354741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.354753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.354764] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:35.547 [2024-10-07 09:35:24.354773] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.354784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.354798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.354812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.354826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.354894] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.354910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.354924] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:35.547 [2024-10-07 09:35:24.354933] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:35.547 [2024-10-07 09:35:24.354943] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.547 [2024-10-07 09:35:24.354953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.354974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355006] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:35.547 [2024-10-07 09:35:24.355021] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355035] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355063] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.547 [2024-10-07 09:35:24.355071] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.547 [2024-10-07 09:35:24.355077] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.547 [2024-10-07 09:35:24.355086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355154] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355166] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.547 [2024-10-07 09:35:24.355174] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.547 [2024-10-07 09:35:24.355180] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.547 [2024-10-07 09:35:24.355189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355240] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355250] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355259] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355266] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355274] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:35.547 [2024-10-07 09:35:24.355281] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:35.547 [2024-10-07 09:35:24.355293] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:35.547 [2024-10-07 09:35:24.355318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355348] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355433] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:35.547 [2024-10-07 09:35:24.355442] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:35.547 [2024-10-07 09:35:24.355448] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:35.547 [2024-10-07 09:35:24.355454] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:35.547 [2024-10-07 09:35:24.355460] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:35.547 [2024-10-07 09:35:24.355469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:35.547 [2024-10-07 09:35:24.355480] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:35.547 [2024-10-07 
09:35:24.355488] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:35.547 [2024-10-07 09:35:24.355494] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.547 [2024-10-07 09:35:24.355502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355513] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:35.547 [2024-10-07 09:35:24.355520] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.547 [2024-10-07 09:35:24.355526] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.547 [2024-10-07 09:35:24.355534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355546] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:35.547 [2024-10-07 09:35:24.355554] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:35.547 [2024-10-07 09:35:24.355560] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.547 [2024-10-07 09:35:24.355568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:35.547 [2024-10-07 09:35:24.355579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:14:35.547 [2024-10-07 09:35:24.355619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:35.547 [2024-10-07 09:35:24.355631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:35.547 ===================================================== 00:14:35.547 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:35.547 ===================================================== 00:14:35.547 Controller Capabilities/Features 00:14:35.547 ================================ 00:14:35.547 Vendor ID: 4e58 00:14:35.547 Subsystem Vendor ID: 4e58 00:14:35.547 Serial Number: SPDK1 00:14:35.547 Model Number: SPDK bdev Controller 00:14:35.547 Firmware Version: 25.01 00:14:35.547 Recommended Arb Burst: 6 00:14:35.547 IEEE OUI Identifier: 8d 6b 50 00:14:35.547 Multi-path I/O 00:14:35.547 May have multiple subsystem ports: Yes 00:14:35.548 May have multiple controllers: Yes 00:14:35.548 Associated with SR-IOV VF: No 00:14:35.548 Max Data Transfer Size: 131072 00:14:35.548 Max Number of Namespaces: 32 00:14:35.548 Max Number of I/O Queues: 127 00:14:35.548 NVMe Specification Version (VS): 1.3 00:14:35.548 NVMe Specification Version (Identify): 1.3 00:14:35.548 Maximum Queue Entries: 256 00:14:35.548 Contiguous Queues Required: Yes 00:14:35.548 Arbitration Mechanisms Supported 00:14:35.548 Weighted Round Robin: Not Supported 00:14:35.548 Vendor Specific: Not Supported 00:14:35.548 Reset Timeout: 15000 ms 00:14:35.548 Doorbell Stride: 4 bytes 00:14:35.548 NVM Subsystem Reset: Not Supported 00:14:35.548 Command Sets Supported 00:14:35.548 NVM Command Set: Supported 00:14:35.548 Boot Partition: Not Supported 00:14:35.548 Memory Page Size Minimum: 4096 bytes 00:14:35.548 Memory Page Size Maximum: 4096 bytes 00:14:35.548 Persistent Memory Region: Not Supported 00:14:35.548 Optional Asynchronous Events 
Supported 00:14:35.548 Namespace Attribute Notices: Supported 00:14:35.548 Firmware Activation Notices: Not Supported 00:14:35.548 ANA Change Notices: Not Supported 00:14:35.548 PLE Aggregate Log Change Notices: Not Supported 00:14:35.548 LBA Status Info Alert Notices: Not Supported 00:14:35.548 EGE Aggregate Log Change Notices: Not Supported 00:14:35.548 Normal NVM Subsystem Shutdown event: Not Supported 00:14:35.548 Zone Descriptor Change Notices: Not Supported 00:14:35.548 Discovery Log Change Notices: Not Supported 00:14:35.548 Controller Attributes 00:14:35.548 128-bit Host Identifier: Supported 00:14:35.548 Non-Operational Permissive Mode: Not Supported 00:14:35.548 NVM Sets: Not Supported 00:14:35.548 Read Recovery Levels: Not Supported 00:14:35.548 Endurance Groups: Not Supported 00:14:35.548 Predictable Latency Mode: Not Supported 00:14:35.548 Traffic Based Keep ALive: Not Supported 00:14:35.548 Namespace Granularity: Not Supported 00:14:35.548 SQ Associations: Not Supported 00:14:35.548 UUID List: Not Supported 00:14:35.548 Multi-Domain Subsystem: Not Supported 00:14:35.548 Fixed Capacity Management: Not Supported 00:14:35.548 Variable Capacity Management: Not Supported 00:14:35.548 Delete Endurance Group: Not Supported 00:14:35.548 Delete NVM Set: Not Supported 00:14:35.548 Extended LBA Formats Supported: Not Supported 00:14:35.548 Flexible Data Placement Supported: Not Supported 00:14:35.548 00:14:35.548 Controller Memory Buffer Support 00:14:35.548 ================================ 00:14:35.548 Supported: No 00:14:35.548 00:14:35.548 Persistent Memory Region Support 00:14:35.548 ================================ 00:14:35.548 Supported: No 00:14:35.548 00:14:35.548 Admin Command Set Attributes 00:14:35.548 ============================ 00:14:35.548 Security Send/Receive: Not Supported 00:14:35.548 Format NVM: Not Supported 00:14:35.548 Firmware Activate/Download: Not Supported 00:14:35.548 Namespace Management: Not Supported 00:14:35.548 Device Self-Test: 
Not Supported 00:14:35.548 Directives: Not Supported 00:14:35.548 NVMe-MI: Not Supported 00:14:35.548 Virtualization Management: Not Supported 00:14:35.548 Doorbell Buffer Config: Not Supported 00:14:35.548 Get LBA Status Capability: Not Supported 00:14:35.548 Command & Feature Lockdown Capability: Not Supported 00:14:35.548 Abort Command Limit: 4 00:14:35.548 Async Event Request Limit: 4 00:14:35.548 Number of Firmware Slots: N/A 00:14:35.548 Firmware Slot 1 Read-Only: N/A 00:14:35.548 Firmware Activation Without Reset: N/A 00:14:35.548 Multiple Update Detection Support: N/A 00:14:35.548 Firmware Update Granularity: No Information Provided 00:14:35.548 Per-Namespace SMART Log: No 00:14:35.548 Asymmetric Namespace Access Log Page: Not Supported 00:14:35.548 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:35.548 Command Effects Log Page: Supported 00:14:35.548 Get Log Page Extended Data: Supported 00:14:35.548 Telemetry Log Pages: Not Supported 00:14:35.548 Persistent Event Log Pages: Not Supported 00:14:35.548 Supported Log Pages Log Page: May Support 00:14:35.548 Commands Supported & Effects Log Page: Not Supported 00:14:35.548 Feature Identifiers & Effects Log Page:May Support 00:14:35.548 NVMe-MI Commands & Effects Log Page: May Support 00:14:35.548 Data Area 4 for Telemetry Log: Not Supported 00:14:35.548 Error Log Page Entries Supported: 128 00:14:35.548 Keep Alive: Supported 00:14:35.548 Keep Alive Granularity: 10000 ms 00:14:35.548 00:14:35.548 NVM Command Set Attributes 00:14:35.548 ========================== 00:14:35.548 Submission Queue Entry Size 00:14:35.548 Max: 64 00:14:35.548 Min: 64 00:14:35.548 Completion Queue Entry Size 00:14:35.548 Max: 16 00:14:35.548 Min: 16 00:14:35.548 Number of Namespaces: 32 00:14:35.548 Compare Command: Supported 00:14:35.548 Write Uncorrectable Command: Not Supported 00:14:35.548 Dataset Management Command: Supported 00:14:35.548 Write Zeroes Command: Supported 00:14:35.548 Set Features Save Field: Not Supported 
00:14:35.548 Reservations: Not Supported 00:14:35.548 Timestamp: Not Supported 00:14:35.548 Copy: Supported 00:14:35.548 Volatile Write Cache: Present 00:14:35.548 Atomic Write Unit (Normal): 1 00:14:35.548 Atomic Write Unit (PFail): 1 00:14:35.548 Atomic Compare & Write Unit: 1 00:14:35.548 Fused Compare & Write: Supported 00:14:35.548 Scatter-Gather List 00:14:35.548 SGL Command Set: Supported (Dword aligned) 00:14:35.548 SGL Keyed: Not Supported 00:14:35.548 SGL Bit Bucket Descriptor: Not Supported 00:14:35.548 SGL Metadata Pointer: Not Supported 00:14:35.548 Oversized SGL: Not Supported 00:14:35.548 SGL Metadata Address: Not Supported 00:14:35.548 SGL Offset: Not Supported 00:14:35.548 Transport SGL Data Block: Not Supported 00:14:35.548 Replay Protected Memory Block: Not Supported 00:14:35.548 00:14:35.548 Firmware Slot Information 00:14:35.548 ========================= 00:14:35.548 Active slot: 1 00:14:35.548 Slot 1 Firmware Revision: 25.01 00:14:35.548 00:14:35.548 00:14:35.548 Commands Supported and Effects 00:14:35.548 ============================== 00:14:35.548 Admin Commands 00:14:35.548 -------------- 00:14:35.548 Get Log Page (02h): Supported 00:14:35.548 Identify (06h): Supported 00:14:35.548 Abort (08h): Supported 00:14:35.548 Set Features (09h): Supported 00:14:35.548 Get Features (0Ah): Supported 00:14:35.548 Asynchronous Event Request (0Ch): Supported 00:14:35.548 Keep Alive (18h): Supported 00:14:35.548 I/O Commands 00:14:35.548 ------------ 00:14:35.548 Flush (00h): Supported LBA-Change 00:14:35.548 Write (01h): Supported LBA-Change 00:14:35.548 Read (02h): Supported 00:14:35.548 Compare (05h): Supported 00:14:35.548 Write Zeroes (08h): Supported LBA-Change 00:14:35.548 Dataset Management (09h): Supported LBA-Change 00:14:35.548 Copy (19h): Supported LBA-Change 00:14:35.548 00:14:35.548 Error Log 00:14:35.548 ========= 00:14:35.548 00:14:35.548 Arbitration 00:14:35.548 =========== 00:14:35.548 Arbitration Burst: 1 00:14:35.548 00:14:35.548 Power 
Management 00:14:35.548 ================ 00:14:35.548 Number of Power States: 1 00:14:35.548 Current Power State: Power State #0 00:14:35.548 Power State #0: 00:14:35.548 Max Power: 0.00 W 00:14:35.548 Non-Operational State: Operational 00:14:35.548 Entry Latency: Not Reported 00:14:35.548 Exit Latency: Not Reported 00:14:35.548 Relative Read Throughput: 0 00:14:35.548 Relative Read Latency: 0 00:14:35.548 Relative Write Throughput: 0 00:14:35.548 Relative Write Latency: 0 00:14:35.548 Idle Power: Not Reported 00:14:35.548 Active Power: Not Reported 00:14:35.548 Non-Operational Permissive Mode: Not Supported 00:14:35.548 00:14:35.548 Health Information 00:14:35.548 ================== 00:14:35.548 Critical Warnings: 00:14:35.548 Available Spare Space: OK 00:14:35.548 Temperature: OK 00:14:35.548 Device Reliability: OK 00:14:35.548 Read Only: No 00:14:35.548 Volatile Memory Backup: OK 00:14:35.548 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:35.548 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:35.548 Available Spare: 0% 00:14:35.548 Available Spare Threshold: 0% 00:14:35.549 Life Percentage Used: 0% 00:14:35.549 Data Units Read: 0 00:14:35.549 Data Units Written: 0 00:14:35.549 Host Read Commands: 0 00:14:35.549 Host Write Commands: 0 00:14:35.549 Controller Busy Time: 0 minutes 00:14:35.549 Power Cycles: 0 00:14:35.549 Power On Hours: 0 hours 00:14:35.549 Unsafe Shutdowns: 0 00:14:35.549 Unrecoverable Media Errors: 0 00:14:35.549 Lifetime Error Log Entries: 0 00:14:35.549 Warning Temperature Time: 0 minutes 00:14:35.549 Critical Temperature Time: 0 minutes 00:14:35.549 00:14:35.549 Number of Queues 00:14:35.549 ================ 00:14:35.549 Number of I/O Submission Queues: 127 00:14:35.549 Number of I/O Completion Queues: 127 00:14:35.549 00:14:35.549 Active Namespaces 00:14:35.549 ================= 00:14:35.549 Namespace ID:1 00:14:35.549 Error Recovery Timeout: Unlimited 00:14:35.549 Command Set Identifier: NVM (00h) 00:14:35.549 Deallocate: Supported 00:14:35.549 Deallocated/Unwritten Error: Not Supported 00:14:35.549 Deallocated Read Value: Unknown 00:14:35.549 Deallocate in Write Zeroes: Not Supported 00:14:35.549 Deallocated Guard Field: 0xFFFF 00:14:35.549 Flush: Supported 00:14:35.549 Reservation: Supported 00:14:35.549 Namespace Sharing Capabilities: Multiple Controllers 00:14:35.549 Size (in LBAs): 131072 (0GiB) 00:14:35.549 Capacity (in LBAs): 131072 (0GiB) 00:14:35.549 Utilization (in LBAs): 131072 (0GiB) 00:14:35.549 NGUID: 4735CD9965F84D9CB61198CF54FBBDD4 00:14:35.549 UUID: 4735cd99-65f8-4d9c-b611-98cf54fbbdd4 00:14:35.549 Thin Provisioning: Not Supported 00:14:35.549 Per-NS Atomic Units: Yes 00:14:35.549 Atomic Boundary Size (Normal): 0 00:14:35.549 Atomic Boundary Size (PFail): 0 00:14:35.549 Atomic Boundary Offset: 0 00:14:35.549 Maximum Single Source Range Length: 65535 00:14:35.549 Maximum Copy Length: 65535 00:14:35.549 Maximum Source Range Count: 1 00:14:35.549 NGUID/EUI64 Never Reused: No 00:14:35.549 Namespace Write Protected: No 00:14:35.549 Number of LBA Formats: 1 00:14:35.549 Current LBA Format: LBA Format #00 00:14:35.549 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:35.549 00:14:35.549 [2024-10-07 09:35:24.355774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:35.548 [2024-10-07 09:35:24.355791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:35.548 [2024-10-07 09:35:24.355831] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:35.548 [2024-10-07 09:35:24.355849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.548 [2024-10-07 09:35:24.355860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.548 [2024-10-07 09:35:24.355870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.549 [2024-10-07 09:35:24.355879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.549 [2024-10-07 09:35:24.356327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:35.549 [2024-10-07 09:35:24.356347] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:35.549 [2024-10-07 09:35:24.357329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:35.549 [2024-10-07 09:35:24.357399] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:35.549 [2024-10-07 09:35:24.357412] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:35.549 [2024-10-07 09:35:24.358342] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:35.549 [2024-10-07 09:35:24.358364] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:35.549 [2024-10-07 09:35:24.358424] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:35.549 [2024-10-07 09:35:24.362677] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:35.549 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:35.809 [2024-10-07 09:35:24.594526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.088 Initializing NVMe Controllers 00:14:41.088 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.088 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:41.088 Initialization complete. Launching workers. 00:14:41.088 ======================================================== 00:14:41.088 Latency(us) 00:14:41.088 Device Information : IOPS MiB/s Average min max 00:14:41.088 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33168.18 129.56 3858.62 1177.96 8307.16 00:14:41.088 ======================================================== 00:14:41.088 Total : 33168.18 129.56 3858.62 1177.96 8307.16 00:14:41.088 00:14:41.088 [2024-10-07 09:35:29.614062] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.088 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:41.088 [2024-10-07 09:35:29.853225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.358 Initializing NVMe Controllers 00:14:46.358 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.358 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:46.358 
Initialization complete. Launching workers. 00:14:46.358 ======================================================== 00:14:46.358 Latency(us) 00:14:46.358 Device Information : IOPS MiB/s Average min max 00:14:46.358 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16024.69 62.60 7998.54 7534.57 15962.56 00:14:46.358 ======================================================== 00:14:46.358 Total : 16024.69 62.60 7998.54 7534.57 15962.56 00:14:46.358 00:14:46.358 [2024-10-07 09:35:34.890415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.358 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:46.358 [2024-10-07 09:35:35.104498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.631 [2024-10-07 09:35:40.194116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.631 Initializing NVMe Controllers 00:14:51.631 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.631 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.631 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:51.631 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:51.631 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:51.631 Initialization complete. Launching workers. 
00:14:51.631 Starting thread on core 2 00:14:51.631 Starting thread on core 3 00:14:51.631 Starting thread on core 1 00:14:51.631 09:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:51.631 [2024-10-07 09:35:40.507193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.921 [2024-10-07 09:35:43.573954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.921 Initializing NVMe Controllers 00:14:54.921 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.921 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.921 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:54.921 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:54.921 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:54.921 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:54.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:54.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:54.921 Initialization complete. Launching workers. 
00:14:54.921 Starting thread on core 1 with urgent priority queue 00:14:54.921 Starting thread on core 2 with urgent priority queue 00:14:54.921 Starting thread on core 3 with urgent priority queue 00:14:54.921 Starting thread on core 0 with urgent priority queue 00:14:54.921 SPDK bdev Controller (SPDK1 ) core 0: 6748.33 IO/s 14.82 secs/100000 ios 00:14:54.921 SPDK bdev Controller (SPDK1 ) core 1: 4922.67 IO/s 20.31 secs/100000 ios 00:14:54.921 SPDK bdev Controller (SPDK1 ) core 2: 5770.33 IO/s 17.33 secs/100000 ios 00:14:54.921 SPDK bdev Controller (SPDK1 ) core 3: 4928.33 IO/s 20.29 secs/100000 ios 00:14:54.921 ======================================================== 00:14:54.921 00:14:54.921 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:54.921 [2024-10-07 09:35:43.869765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.921 Initializing NVMe Controllers 00:14:54.921 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.921 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.921 Namespace ID: 1 size: 0GB 00:14:54.921 Initialization complete. 00:14:54.921 INFO: using host memory buffer for IO 00:14:54.921 Hello world! 
00:14:54.921 [2024-10-07 09:35:43.905325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.180 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:55.439 [2024-10-07 09:35:44.200173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.381 Initializing NVMe Controllers 00:14:56.381 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.381 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.381 Initialization complete. Launching workers. 00:14:56.381 submit (in ns) avg, min, max = 6715.2, 3488.9, 4002026.7 00:14:56.381 complete (in ns) avg, min, max = 27166.9, 2060.0, 4015625.6 00:14:56.381 00:14:56.381 Submit histogram 00:14:56.381 ================ 00:14:56.381 Range in us Cumulative Count 00:14:56.381 3.484 - 3.508: 0.2057% ( 27) 00:14:56.381 3.508 - 3.532: 0.9297% ( 95) 00:14:56.381 3.532 - 3.556: 3.0481% ( 278) 00:14:56.381 3.556 - 3.579: 8.0393% ( 655) 00:14:56.381 3.579 - 3.603: 14.9204% ( 903) 00:14:56.381 3.603 - 3.627: 23.5998% ( 1139) 00:14:56.381 3.627 - 3.650: 31.8525% ( 1083) 00:14:56.381 3.650 - 3.674: 39.7775% ( 1040) 00:14:56.381 3.674 - 3.698: 46.9405% ( 940) 00:14:56.381 3.698 - 3.721: 53.1205% ( 811) 00:14:56.381 3.721 - 3.745: 57.3345% ( 553) 00:14:56.381 3.745 - 3.769: 61.0379% ( 486) 00:14:56.381 3.769 - 3.793: 64.3679% ( 437) 00:14:56.381 3.793 - 3.816: 67.8046% ( 451) 00:14:56.381 3.816 - 3.840: 71.6071% ( 499) 00:14:56.381 3.840 - 3.864: 75.8211% ( 553) 00:14:56.381 3.864 - 3.887: 79.4712% ( 479) 00:14:56.381 3.887 - 3.911: 82.8393% ( 442) 00:14:56.381 3.911 - 3.935: 85.6435% ( 368) 00:14:56.381 3.935 - 3.959: 87.5333% ( 248) 00:14:56.381 3.959 - 3.982: 89.1869% ( 
217) 00:14:56.381 3.982 - 4.006: 90.4443% ( 165) 00:14:56.381 4.006 - 4.030: 91.4044% ( 126) 00:14:56.381 4.030 - 4.053: 92.3722% ( 127) 00:14:56.381 4.053 - 4.077: 93.2256% ( 112) 00:14:56.381 4.077 - 4.101: 93.9496% ( 95) 00:14:56.381 4.101 - 4.124: 94.6354% ( 90) 00:14:56.381 4.124 - 4.148: 95.1002% ( 61) 00:14:56.381 4.148 - 4.172: 95.4660% ( 48) 00:14:56.381 4.172 - 4.196: 95.8241% ( 47) 00:14:56.381 4.196 - 4.219: 96.0299% ( 27) 00:14:56.381 4.219 - 4.243: 96.1289% ( 13) 00:14:56.381 4.243 - 4.267: 96.2890% ( 21) 00:14:56.381 4.267 - 4.290: 96.4109% ( 16) 00:14:56.381 4.290 - 4.314: 96.5252% ( 15) 00:14:56.381 4.314 - 4.338: 96.6319% ( 14) 00:14:56.381 4.338 - 4.361: 96.7538% ( 16) 00:14:56.381 4.361 - 4.385: 96.8224% ( 9) 00:14:56.381 4.385 - 4.409: 96.8910% ( 9) 00:14:56.381 4.409 - 4.433: 96.9443% ( 7) 00:14:56.381 4.433 - 4.456: 97.0053% ( 8) 00:14:56.381 4.456 - 4.480: 97.0662% ( 8) 00:14:56.381 4.480 - 4.504: 97.0738% ( 1) 00:14:56.381 4.504 - 4.527: 97.0967% ( 3) 00:14:56.381 4.551 - 4.575: 97.1119% ( 2) 00:14:56.381 4.575 - 4.599: 97.1272% ( 2) 00:14:56.381 4.599 - 4.622: 97.1348% ( 1) 00:14:56.381 4.622 - 4.646: 97.1424% ( 1) 00:14:56.381 4.670 - 4.693: 97.1500% ( 1) 00:14:56.381 4.717 - 4.741: 97.1577% ( 1) 00:14:56.381 4.741 - 4.764: 97.1805% ( 3) 00:14:56.381 4.764 - 4.788: 97.2110% ( 4) 00:14:56.381 4.788 - 4.812: 97.2262% ( 2) 00:14:56.381 4.812 - 4.836: 97.2643% ( 5) 00:14:56.381 4.836 - 4.859: 97.2796% ( 2) 00:14:56.381 4.859 - 4.883: 97.3405% ( 8) 00:14:56.381 4.883 - 4.907: 97.4015% ( 8) 00:14:56.381 4.907 - 4.930: 97.4701% ( 9) 00:14:56.381 4.930 - 4.954: 97.5234% ( 7) 00:14:56.381 4.954 - 4.978: 97.5768% ( 7) 00:14:56.381 4.978 - 5.001: 97.5996% ( 3) 00:14:56.381 5.001 - 5.025: 97.6758% ( 10) 00:14:56.381 5.025 - 5.049: 97.7292% ( 7) 00:14:56.381 5.073 - 5.096: 97.7597% ( 4) 00:14:56.381 5.096 - 5.120: 97.7978% ( 5) 00:14:56.381 5.120 - 5.144: 97.8511% ( 7) 00:14:56.381 5.144 - 5.167: 97.8663% ( 2) 00:14:56.381 5.167 - 5.191: 97.9197% ( 7) 
00:14:56.381 5.191 - 5.215: 97.9273% ( 1) 00:14:56.381 5.239 - 5.262: 97.9502% ( 3) 00:14:56.381 5.262 - 5.286: 97.9730% ( 3) 00:14:56.381 5.286 - 5.310: 97.9883% ( 2) 00:14:56.381 5.333 - 5.357: 97.9959% ( 1) 00:14:56.381 5.381 - 5.404: 98.0111% ( 2) 00:14:56.381 5.404 - 5.428: 98.0340% ( 3) 00:14:56.381 5.452 - 5.476: 98.0416% ( 1) 00:14:56.381 5.476 - 5.499: 98.0492% ( 1) 00:14:56.381 5.499 - 5.523: 98.0645% ( 2) 00:14:56.381 5.523 - 5.547: 98.0721% ( 1) 00:14:56.381 5.547 - 5.570: 98.0797% ( 1) 00:14:56.381 5.570 - 5.594: 98.0949% ( 2) 00:14:56.381 5.618 - 5.641: 98.1026% ( 1) 00:14:56.381 5.641 - 5.665: 98.1102% ( 1) 00:14:56.381 5.973 - 5.997: 98.1178% ( 1) 00:14:56.381 6.210 - 6.258: 98.1254% ( 1) 00:14:56.381 6.447 - 6.495: 98.1330% ( 1) 00:14:56.381 6.637 - 6.684: 98.1483% ( 2) 00:14:56.381 6.969 - 7.016: 98.1559% ( 1) 00:14:56.381 7.396 - 7.443: 98.1635% ( 1) 00:14:56.381 7.490 - 7.538: 98.1711% ( 1) 00:14:56.381 7.538 - 7.585: 98.1788% ( 1) 00:14:56.381 7.680 - 7.727: 98.1940% ( 2) 00:14:56.382 7.727 - 7.775: 98.2016% ( 1) 00:14:56.382 7.775 - 7.822: 98.2093% ( 1) 00:14:56.382 7.822 - 7.870: 98.2169% ( 1) 00:14:56.382 7.870 - 7.917: 98.2245% ( 1) 00:14:56.382 7.964 - 8.012: 98.2321% ( 1) 00:14:56.382 8.012 - 8.059: 98.2397% ( 1) 00:14:56.382 8.201 - 8.249: 98.2474% ( 1) 00:14:56.382 8.296 - 8.344: 98.2550% ( 1) 00:14:56.382 8.439 - 8.486: 98.2626% ( 1) 00:14:56.382 8.581 - 8.628: 98.2702% ( 1) 00:14:56.382 8.628 - 8.676: 98.2931% ( 3) 00:14:56.382 8.676 - 8.723: 98.3312% ( 5) 00:14:56.382 8.723 - 8.770: 98.3388% ( 1) 00:14:56.382 8.818 - 8.865: 98.3464% ( 1) 00:14:56.382 8.865 - 8.913: 98.3617% ( 2) 00:14:56.382 9.007 - 9.055: 98.3693% ( 1) 00:14:56.382 9.055 - 9.102: 98.3769% ( 1) 00:14:56.382 9.102 - 9.150: 98.3921% ( 2) 00:14:56.382 9.197 - 9.244: 98.3998% ( 1) 00:14:56.382 9.292 - 9.339: 98.4150% ( 2) 00:14:56.382 9.387 - 9.434: 98.4302% ( 2) 00:14:56.382 9.434 - 9.481: 98.4379% ( 1) 00:14:56.382 9.624 - 9.671: 98.4455% ( 1) 00:14:56.382 9.719 - 
9.766: 98.4531% ( 1) 00:14:56.382 9.766 - 9.813: 98.4607% ( 1) 00:14:56.382 9.813 - 9.861: 98.4912% ( 4) 00:14:56.382 9.861 - 9.908: 98.4988% ( 1) 00:14:56.382 9.908 - 9.956: 98.5064% ( 1) 00:14:56.382 9.956 - 10.003: 98.5141% ( 1) 00:14:56.382 10.050 - 10.098: 98.5217% ( 1) 00:14:56.382 10.098 - 10.145: 98.5293% ( 1) 00:14:56.382 10.193 - 10.240: 98.5445% ( 2) 00:14:56.382 10.430 - 10.477: 98.5522% ( 1) 00:14:56.382 10.477 - 10.524: 98.5598% ( 1) 00:14:56.382 10.524 - 10.572: 98.5674% ( 1) 00:14:56.382 10.619 - 10.667: 98.5750% ( 1) 00:14:56.382 10.667 - 10.714: 98.5979% ( 3) 00:14:56.382 10.761 - 10.809: 98.6055% ( 1) 00:14:56.382 10.809 - 10.856: 98.6207% ( 2) 00:14:56.382 10.904 - 10.951: 98.6284% ( 1) 00:14:56.382 11.046 - 11.093: 98.6360% ( 1) 00:14:56.382 11.188 - 11.236: 98.6436% ( 1) 00:14:56.382 11.520 - 11.567: 98.6512% ( 1) 00:14:56.382 11.567 - 11.615: 98.6665% ( 2) 00:14:56.382 11.852 - 11.899: 98.6893% ( 3) 00:14:56.382 12.136 - 12.231: 98.7046% ( 2) 00:14:56.382 12.421 - 12.516: 98.7122% ( 1) 00:14:56.382 12.895 - 12.990: 98.7198% ( 1) 00:14:56.382 12.990 - 13.084: 98.7274% ( 1) 00:14:56.382 13.464 - 13.559: 98.7503% ( 3) 00:14:56.382 13.559 - 13.653: 98.7655% ( 2) 00:14:56.382 13.653 - 13.748: 98.7731% ( 1) 00:14:56.382 13.748 - 13.843: 98.7884% ( 2) 00:14:56.382 13.843 - 13.938: 98.8036% ( 2) 00:14:56.382 14.033 - 14.127: 98.8112% ( 1) 00:14:56.382 14.222 - 14.317: 98.8341% ( 3) 00:14:56.382 14.317 - 14.412: 98.8417% ( 1) 00:14:56.382 14.412 - 14.507: 98.8493% ( 1) 00:14:56.382 14.696 - 14.791: 98.8570% ( 1) 00:14:56.382 14.886 - 14.981: 98.8646% ( 1) 00:14:56.382 15.076 - 15.170: 98.8722% ( 1) 00:14:56.382 16.782 - 16.877: 98.8798% ( 1) 00:14:56.382 17.351 - 17.446: 98.9256% ( 6) 00:14:56.382 17.446 - 17.541: 98.9713% ( 6) 00:14:56.382 17.541 - 17.636: 99.0170% ( 6) 00:14:56.382 17.636 - 17.730: 99.0703% ( 7) 00:14:56.382 17.730 - 17.825: 99.1084% ( 5) 00:14:56.382 17.825 - 17.920: 99.1313% ( 3) 00:14:56.382 17.920 - 18.015: 99.1999% ( 9) 
00:14:56.382 18.015 - 18.110: 99.2685% ( 9) 00:14:56.382 18.110 - 18.204: 99.3218% ( 7) 00:14:56.382 18.204 - 18.299: 99.4209% ( 13) 00:14:56.382 18.299 - 18.394: 99.5123% ( 12) 00:14:56.382 18.394 - 18.489: 99.5809% ( 9) 00:14:56.382 18.489 - 18.584: 99.6266% ( 6) 00:14:56.382 18.584 - 18.679: 99.6419% ( 2) 00:14:56.382 18.679 - 18.773: 99.7028% ( 8) 00:14:56.382 18.773 - 18.868: 99.7257% ( 3) 00:14:56.382 18.868 - 18.963: 99.7485% ( 3) 00:14:56.382 18.963 - 19.058: 99.7562% ( 1) 00:14:56.382 19.058 - 19.153: 99.7714% ( 2) 00:14:56.382 19.153 - 19.247: 99.7790% ( 1) 00:14:56.382 19.247 - 19.342: 99.7943% ( 2) 00:14:56.382 19.342 - 19.437: 99.8019% ( 1) 00:14:56.382 20.764 - 20.859: 99.8095% ( 1) 00:14:56.382 20.954 - 21.049: 99.8247% ( 2) 00:14:56.382 22.092 - 22.187: 99.8324% ( 1) 00:14:56.382 22.850 - 22.945: 99.8400% ( 1) 00:14:56.382 23.514 - 23.609: 99.8476% ( 1) 00:14:56.382 24.083 - 24.178: 99.8552% ( 1) 00:14:56.382 25.031 - 25.221: 99.8705% ( 2) 00:14:56.382 25.410 - 25.600: 99.8781% ( 1) 00:14:56.382 25.790 - 25.979: 99.8857% ( 1) 00:14:56.382 26.169 - 26.359: 99.8933% ( 1) 00:14:56.382 27.496 - 27.686: 99.9009% ( 1) 00:14:56.382 28.634 - 28.824: 99.9086% ( 1) 00:14:56.382 29.203 - 29.393: 99.9162% ( 1) 00:14:56.382 30.720 - 30.910: 99.9238% ( 1) 00:14:56.382 34.323 - 34.513: 99.9314% ( 1) 00:14:56.382 3980.705 - 4004.978: 100.0000% ( 9) 00:14:56.382 00:14:56.382 Complete histogram 00:14:56.382 ================== 00:14:56.382 Range in us Cumulative Count 00:14:56.382 2.050 - 2.062: 0.0381% ( 5) 00:14:56.382 2.062 - 2.074: 14.9051% ( 1951) 00:14:56.382 2.074 - 2.086: 45.2259% ( 3979) 00:14:56.382 2.086 - 2.098: 47.3672% ( 281) 00:14:56.382 2.098 - 2.110: 52.8385% ( 718) 00:14:56.382 2.110 - 2.121: 58.9271% ( 799) 00:14:56.382 2.121 - 2.133: 61.3046% ( 312) 00:14:56.382 2.133 - 2.145: 70.7003% ( 1233) 00:14:56.382 2.145 - 2.157: 76.8117% ( 802) 00:14:56.382 2.157 - 2.169: 77.8100% ( 131) 00:14:56.382 2.169 - 2.181: 80.1341% ( 305) 00:14:56.382 2.181 - 
2.193: 81.7344% ( 210) 00:14:56.382 2.193 - 2.204: 82.6716% ( 123) 00:14:56.382 2.204 - 2.216: 86.2989% ( 476) 00:14:56.382 2.216 - 2.228: 89.2250% ( 384) 00:14:56.382 2.228 - 2.240: 91.2825% ( 270) 00:14:56.382 2.240 - 2.252: 92.7989% ( 199) 00:14:56.382 2.252 - 2.264: 93.5457% ( 98) 00:14:56.382 2.264 - 2.276: 93.8276% ( 37) 00:14:56.382 2.276 - 2.287: 94.2010% ( 49) 00:14:56.382 2.287 - 2.299: 94.6278% ( 56) 00:14:56.382 2.299 - 2.311: 95.2526% ( 82) 00:14:56.382 2.311 - 2.323: 95.7174% ( 61) 00:14:56.382 2.323 - 2.335: 95.7403% ( 3) 00:14:56.382 2.335 - 2.347: 95.7860% ( 6) 00:14:56.382 2.347 - 2.359: 95.8317% ( 6) 00:14:56.382 2.359 - 2.370: 95.8851% ( 7) 00:14:56.382 2.370 - 2.382: 96.0756% ( 25) 00:14:56.382 2.382 - 2.394: 96.2813% ( 27) 00:14:56.382 2.394 - 2.406: 96.5023% ( 29) 00:14:56.382 2.406 - 2.418: 96.6700% ( 22) 00:14:56.382 2.418 - 2.430: 96.8529% ( 24) 00:14:56.382 2.430 - 2.441: 96.9976% ( 19) 00:14:56.382 2.441 - 2.453: 97.1653% ( 22) 00:14:56.382 2.453 - 2.465: 97.4015% ( 31) 00:14:56.382 2.465 - 2.477: 97.5539% ( 20) 00:14:56.382 2.477 - 2.489: 97.6835% ( 17) 00:14:56.382 2.489 - 2.501: 97.8359% ( 20) 00:14:56.382 2.501 - 2.513: 97.9502% ( 15) 00:14:56.382 2.513 - 2.524: 98.0340% ( 11) 00:14:56.382 2.524 - 2.536: 98.1102% ( 10) 00:14:56.382 2.536 - 2.548: 98.1940% ( 11) 00:14:56.382 2.548 - 2.560: 98.2626% ( 9) 00:14:56.382 2.560 - 2.572: 98.2931% ( 4) 00:14:56.382 2.572 - 2.584: 98.3312% ( 5) 00:14:56.382 2.596 - 2.607: 98.3540% ( 3) 00:14:56.382 2.619 - 2.631: 98.3617% ( 1) 00:14:56.382 2.631 - 2.643: 98.3693% ( 1) 00:14:56.382 2.655 - 2.667: 98.3769% ( 1) 00:14:56.382 2.679 - 2.690: 98.3845% ( 1) 00:14:56.382 2.702 - 2.714: 98.3921% ( 1) 00:14:56.382 2.833 - 2.844: 98.3998% ( 1) 00:14:56.382 2.844 - 2.856: 98.4074% ( 1) 00:14:56.382 2.951 - 2.963: 98.4150% ( 1) 00:14:56.382 3.010 - 3.022: 98.4226% ( 1) 00:14:56.382 3.129 - 3.153: 98.4302% ( 1) 00:14:56.382 3.200 - 3.224: 98.4379% ( 1) 00:14:56.382 3.224 - 3.247: 98.4455% ( 1) 00:14:56.382 
3.247 - 3.271: 98.4531% ( 1) 00:14:56.382 3.271 - 3.295: 98.4607% ( 1) [2024-10-07 09:35:45.223339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.382 3.319 - 3.342: 98.4683% ( 1) 00:14:56.382 3.342 - 3.366: 98.4760% ( 1) 00:14:56.382 3.366 - 3.390: 98.4836% ( 1) 00:14:56.382 3.390 - 3.413: 98.4912% ( 1) 00:14:56.382 3.413 - 3.437: 98.4988% ( 1) 00:14:56.382 3.437 - 3.461: 98.5141% ( 2) 00:14:56.382 3.461 - 3.484: 98.5522% ( 5) 00:14:56.382 3.484 - 3.508: 98.5598% ( 1) 00:14:56.382 3.532 - 3.556: 98.5826% ( 3) 00:14:56.382 3.556 - 3.579: 98.5979% ( 2) 00:14:56.382 3.579 - 3.603: 98.6131% ( 2) 00:14:56.382 3.674 - 3.698: 98.6207% ( 1) 00:14:56.382 3.745 - 3.769: 98.6360% ( 2) 00:14:56.382 3.887 - 3.911: 98.6436% ( 1) 00:14:56.382 3.935 - 3.959: 98.6512% ( 1) 00:14:56.382 5.926 - 5.950: 98.6588% ( 1) 00:14:56.382 6.068 - 6.116: 98.6665% ( 1) 00:14:56.382 6.258 - 6.305: 98.6741% ( 1) 00:14:56.383 6.305 - 6.353: 98.6893% ( 2) 00:14:56.383 6.400 - 6.447: 98.6969% ( 1) 00:14:56.383 6.447 - 6.495: 98.7046% ( 1) 00:14:56.383 6.542 - 6.590: 98.7122% ( 1) 00:14:56.383 6.590 - 6.637: 98.7198% ( 1) 00:14:56.383 6.637 - 6.684: 98.7274% ( 1) 00:14:56.383 6.827 - 6.874: 98.7350% ( 1) 00:14:56.383 7.016 - 7.064: 98.7503% ( 2) 00:14:56.383 7.064 - 7.111: 98.7579% ( 1) 00:14:56.383 7.111 - 7.159: 98.7655% ( 1) 00:14:56.383 7.396 - 7.443: 98.7731% ( 1) 00:14:56.383 7.538 - 7.585: 98.7808% ( 1) 00:14:56.383 7.633 - 7.680: 98.7884% ( 1) 00:14:56.383 7.680 - 7.727: 98.7960% ( 1) 00:14:56.383 8.391 - 8.439: 98.8036% ( 1) 00:14:56.383 8.533 - 8.581: 98.8112% ( 1) 00:14:56.383 8.818 - 8.865: 98.8189% ( 1) 00:14:56.383 10.999 - 11.046: 98.8265% ( 1) 00:14:56.383 15.455 - 15.550: 98.8493% ( 3) 00:14:56.383 15.644 - 15.739: 98.8570% ( 1) 00:14:56.383 15.739 - 15.834: 98.8722% ( 2) 00:14:56.383 15.834 - 15.929: 98.9103% ( 5) 00:14:56.383 15.929 - 16.024: 98.9408% ( 4) 00:14:56.383 16.024 - 16.119: 98.9637% ( 3) 
00:14:56.383 16.119 - 16.213: 98.9865% ( 3) 00:14:56.383 16.213 - 16.308: 99.0094% ( 3) 00:14:56.383 16.308 - 16.403: 99.0703% ( 8) 00:14:56.383 16.403 - 16.498: 99.1161% ( 6) 00:14:56.383 16.498 - 16.593: 99.1542% ( 5) 00:14:56.383 16.593 - 16.687: 99.1999% ( 6) 00:14:56.383 16.687 - 16.782: 99.2075% ( 1) 00:14:56.383 16.782 - 16.877: 99.2532% ( 6) 00:14:56.383 16.877 - 16.972: 99.2837% ( 4) 00:14:56.383 16.972 - 17.067: 99.2913% ( 1) 00:14:56.383 17.067 - 17.161: 99.3142% ( 3) 00:14:56.383 17.161 - 17.256: 99.3294% ( 2) 00:14:56.383 17.256 - 17.351: 99.3370% ( 1) 00:14:56.383 17.351 - 17.446: 99.3447% ( 1) 00:14:56.383 17.636 - 17.730: 99.3523% ( 1) 00:14:56.383 17.730 - 17.825: 99.3599% ( 1) 00:14:56.383 18.110 - 18.204: 99.3675% ( 1) 00:14:56.383 18.394 - 18.489: 99.3751% ( 1) 00:14:56.383 3325.345 - 3349.618: 99.3828% ( 1) 00:14:56.383 3980.705 - 4004.978: 99.8857% ( 66) 00:14:56.383 4004.978 - 4029.250: 100.0000% ( 15) 00:14:56.383 00:14:56.383 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:56.383 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:56.383 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:56.383 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:56.383 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:56.641 [ 00:14:56.641 { 00:14:56.641 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.641 "subtype": "Discovery", 00:14:56.641 "listen_addresses": [], 00:14:56.641 "allow_any_host": true, 00:14:56.641 "hosts": [] 00:14:56.641 }, 00:14:56.641 { 
00:14:56.641 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:56.641 "subtype": "NVMe", 00:14:56.641 "listen_addresses": [ 00:14:56.641 { 00:14:56.641 "trtype": "VFIOUSER", 00:14:56.641 "adrfam": "IPv4", 00:14:56.641 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:56.641 "trsvcid": "0" 00:14:56.641 } 00:14:56.641 ], 00:14:56.641 "allow_any_host": true, 00:14:56.641 "hosts": [], 00:14:56.641 "serial_number": "SPDK1", 00:14:56.641 "model_number": "SPDK bdev Controller", 00:14:56.641 "max_namespaces": 32, 00:14:56.641 "min_cntlid": 1, 00:14:56.641 "max_cntlid": 65519, 00:14:56.641 "namespaces": [ 00:14:56.641 { 00:14:56.641 "nsid": 1, 00:14:56.641 "bdev_name": "Malloc1", 00:14:56.641 "name": "Malloc1", 00:14:56.641 "nguid": "4735CD9965F84D9CB61198CF54FBBDD4", 00:14:56.641 "uuid": "4735cd99-65f8-4d9c-b611-98cf54fbbdd4" 00:14:56.641 } 00:14:56.641 ] 00:14:56.641 }, 00:14:56.641 { 00:14:56.641 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:56.641 "subtype": "NVMe", 00:14:56.641 "listen_addresses": [ 00:14:56.641 { 00:14:56.641 "trtype": "VFIOUSER", 00:14:56.641 "adrfam": "IPv4", 00:14:56.642 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:56.642 "trsvcid": "0" 00:14:56.642 } 00:14:56.642 ], 00:14:56.642 "allow_any_host": true, 00:14:56.642 "hosts": [], 00:14:56.642 "serial_number": "SPDK2", 00:14:56.642 "model_number": "SPDK bdev Controller", 00:14:56.642 "max_namespaces": 32, 00:14:56.642 "min_cntlid": 1, 00:14:56.642 "max_cntlid": 65519, 00:14:56.642 "namespaces": [ 00:14:56.642 { 00:14:56.642 "nsid": 1, 00:14:56.642 "bdev_name": "Malloc2", 00:14:56.642 "name": "Malloc2", 00:14:56.642 "nguid": "6B8BDE5A36F04672A42F27089D7A4EB5", 00:14:56.642 "uuid": "6b8bde5a-36f0-4672-a42f-27089d7a4eb5" 00:14:56.642 } 00:14:56.642 ] 00:14:56.642 } 00:14:56.642 ] 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=198641 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:14:56.642 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:14:56.900 [2024-10-07 09:35:45.714189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:56.900 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:57.157 Malloc3 00:14:57.157 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:57.415 [2024-10-07 09:35:46.339835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.415 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:57.415 Asynchronous Event Request test 00:14:57.415 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.415 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.415 Registering asynchronous event callbacks... 00:14:57.415 Starting namespace attribute notice tests for all controllers... 00:14:57.415 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:57.415 aer_cb - Changed Namespace 00:14:57.415 Cleaning up... 
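The `waitforfile` trace above (`local i=0`, the `'[' '!' -e /tmp/aer_touch_file ']'` test, the `-lt 200` cap, `sleep 0.1`, `return 0`) is a plain existence-polling loop: the AER test process touches `/tmp/aer_touch_file` once its callbacks are registered, and the script spins until the file appears or 200 iterations (about 20 s) elapse. A minimal sketch of that pattern, with the helper body reconstructed as an assumption rather than copied from autotest_common.sh:

```shell
# Illustrative reconstruction of the waitforfile polling loop seen in the
# trace above; the 200-iteration cap and 0.1 s sleep come from the trace,
# the exact function body is an assumption.
waitforfile() {
	local path=$1
	local i=0
	# Poll until the file exists or we give up after 200 * 0.1 s.
	while [ ! -e "$path" ] && [ "$i" -lt 200 ]; do
		i=$((i + 1))
		sleep 0.1
	done
	# Succeed only if the file actually showed up.
	[ -e "$path" ]
}

# Demo: the producer side would "touch" the file; here we do it ourselves.
touch /tmp/aer_touch_file_demo
waitforfile /tmp/aer_touch_file_demo && echo "file present"
rm -f /tmp/aer_touch_file_demo
```

Polling with a bounded iteration count keeps the test from hanging forever if the AER process dies before touching the file, which is why the script then re-checks `-e` once more and only `return 0`s when the file exists.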
00:14:57.673 [ 00:14:57.673 { 00:14:57.673 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:57.673 "subtype": "Discovery", 00:14:57.673 "listen_addresses": [], 00:14:57.673 "allow_any_host": true, 00:14:57.673 "hosts": [] 00:14:57.673 }, 00:14:57.673 { 00:14:57.673 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:57.673 "subtype": "NVMe", 00:14:57.673 "listen_addresses": [ 00:14:57.673 { 00:14:57.673 "trtype": "VFIOUSER", 00:14:57.673 "adrfam": "IPv4", 00:14:57.673 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:57.673 "trsvcid": "0" 00:14:57.673 } 00:14:57.673 ], 00:14:57.673 "allow_any_host": true, 00:14:57.673 "hosts": [], 00:14:57.673 "serial_number": "SPDK1", 00:14:57.673 "model_number": "SPDK bdev Controller", 00:14:57.673 "max_namespaces": 32, 00:14:57.673 "min_cntlid": 1, 00:14:57.673 "max_cntlid": 65519, 00:14:57.673 "namespaces": [ 00:14:57.673 { 00:14:57.673 "nsid": 1, 00:14:57.673 "bdev_name": "Malloc1", 00:14:57.673 "name": "Malloc1", 00:14:57.673 "nguid": "4735CD9965F84D9CB61198CF54FBBDD4", 00:14:57.673 "uuid": "4735cd99-65f8-4d9c-b611-98cf54fbbdd4" 00:14:57.673 }, 00:14:57.673 { 00:14:57.674 "nsid": 2, 00:14:57.674 "bdev_name": "Malloc3", 00:14:57.674 "name": "Malloc3", 00:14:57.674 "nguid": "CC8805DDCE674702AA026A3B04C389EF", 00:14:57.674 "uuid": "cc8805dd-ce67-4702-aa02-6a3b04c389ef" 00:14:57.674 } 00:14:57.674 ] 00:14:57.674 }, 00:14:57.674 { 00:14:57.674 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:57.674 "subtype": "NVMe", 00:14:57.674 "listen_addresses": [ 00:14:57.674 { 00:14:57.674 "trtype": "VFIOUSER", 00:14:57.674 "adrfam": "IPv4", 00:14:57.674 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:57.674 "trsvcid": "0" 00:14:57.674 } 00:14:57.674 ], 00:14:57.674 "allow_any_host": true, 00:14:57.674 "hosts": [], 00:14:57.674 "serial_number": "SPDK2", 00:14:57.674 "model_number": "SPDK bdev Controller", 00:14:57.674 "max_namespaces": 32, 00:14:57.674 "min_cntlid": 1, 00:14:57.674 "max_cntlid": 65519, 00:14:57.674 "namespaces": [ 
00:14:57.674 { 00:14:57.674 "nsid": 1, 00:14:57.674 "bdev_name": "Malloc2", 00:14:57.674 "name": "Malloc2", 00:14:57.674 "nguid": "6B8BDE5A36F04672A42F27089D7A4EB5", 00:14:57.674 "uuid": "6b8bde5a-36f0-4672-a42f-27089d7a4eb5" 00:14:57.674 } 00:14:57.674 ] 00:14:57.674 } 00:14:57.674 ] 00:14:57.674 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 198641 00:14:57.674 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:57.674 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:57.674 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:57.674 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:57.674 [2024-10-07 09:35:46.642795] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:14:57.674 [2024-10-07 09:35:46.642834] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198769 ] 00:14:57.934 [2024-10-07 09:35:46.675049] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:57.934 [2024-10-07 09:35:46.683942] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.934 [2024-10-07 09:35:46.684000] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbd2ff0c000 00:14:57.934 [2024-10-07 09:35:46.684948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.934 [2024-10-07 09:35:46.685945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.934 [2024-10-07 09:35:46.686970] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.934 [2024-10-07 09:35:46.687956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.934 [2024-10-07 09:35:46.688977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.934 [2024-10-07 09:35:46.689967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.934 [2024-10-07 09:35:46.690987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.934 
[2024-10-07 09:35:46.691988] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.934 [2024-10-07 09:35:46.693013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.934 [2024-10-07 09:35:46.693034] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbd2ff01000 00:14:57.934 [2024-10-07 09:35:46.694148] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.935 [2024-10-07 09:35:46.708857] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:57.935 [2024-10-07 09:35:46.708894] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:57.935 [2024-10-07 09:35:46.711008] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:57.935 [2024-10-07 09:35:46.711058] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:57.935 [2024-10-07 09:35:46.711142] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:57.935 [2024-10-07 09:35:46.711167] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:57.935 [2024-10-07 09:35:46.711178] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:57.935 [2024-10-07 09:35:46.712016] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:57.935 [2024-10-07 09:35:46.712036] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:57.935 [2024-10-07 09:35:46.712049] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:57.935 [2024-10-07 09:35:46.713033] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:57.935 [2024-10-07 09:35:46.713054] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:57.935 [2024-10-07 09:35:46.713067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:57.935 [2024-10-07 09:35:46.714038] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:57.935 [2024-10-07 09:35:46.714058] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:57.935 [2024-10-07 09:35:46.715047] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:57.935 [2024-10-07 09:35:46.715066] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:57.935 [2024-10-07 09:35:46.715076] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:57.935 [2024-10-07 09:35:46.715092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:57.935 [2024-10-07 09:35:46.715202] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:57.935 [2024-10-07 09:35:46.715210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:57.935 [2024-10-07 09:35:46.715218] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:57.935 [2024-10-07 09:35:46.719678] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:57.935 [2024-10-07 09:35:46.720091] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:57.935 [2024-10-07 09:35:46.721100] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:57.935 [2024-10-07 09:35:46.722092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.935 [2024-10-07 09:35:46.722155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:57.935 [2024-10-07 09:35:46.723109] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:57.935 [2024-10-07 09:35:46.723128] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:57.935 [2024-10-07 09:35:46.723138] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.723161] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:57.935 [2024-10-07 09:35:46.723177] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.723197] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.935 [2024-10-07 09:35:46.723207] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.935 [2024-10-07 09:35:46.723213] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.935 [2024-10-07 09:35:46.723229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.935 [2024-10-07 09:35:46.729682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:57.935 [2024-10-07 09:35:46.729704] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:57.935 [2024-10-07 09:35:46.729726] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:57.935 [2024-10-07 09:35:46.729734] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:57.935 [2024-10-07 09:35:46.729741] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:57.935 [2024-10-07 09:35:46.729750] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:57.935 [2024-10-07 09:35:46.729757] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:57.935 [2024-10-07 09:35:46.729770] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.729783] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.729799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:57.935 [2024-10-07 09:35:46.737675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:57.935 [2024-10-07 09:35:46.737698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.935 [2024-10-07 09:35:46.737712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.935 [2024-10-07 09:35:46.737724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.935 [2024-10-07 09:35:46.737735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.935 [2024-10-07 09:35:46.737744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.737761] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.737776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:57.935 [2024-10-07 09:35:46.745678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:57.935 [2024-10-07 09:35:46.745695] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:57.935 [2024-10-07 09:35:46.745704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.745715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.745729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.745744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.935 [2024-10-07 09:35:46.753680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:57.935 [2024-10-07 09:35:46.753754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.753771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.753784] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:57.935 [2024-10-07 09:35:46.753792] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:57.935 [2024-10-07 09:35:46.753798] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.935 [2024-10-07 09:35:46.753808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:57.935 [2024-10-07 09:35:46.761679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:57.935 [2024-10-07 09:35:46.761705] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:57.935 [2024-10-07 09:35:46.761722] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.761736] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:57.935 [2024-10-07 09:35:46.761748] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.935 [2024-10-07 09:35:46.761757] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.935 [2024-10-07 09:35:46.761762] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.935 [2024-10-07 09:35:46.761772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.769676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.769704] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.769720] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.769734] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.936 [2024-10-07 09:35:46.769742] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.936 [2024-10-07 09:35:46.769748] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.936 [2024-10-07 09:35:46.769757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.777676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.777697] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.777710] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.777726] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.777737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.777745] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.777753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.777761] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:57.936 [2024-10-07 09:35:46.777768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:57.936 [2024-10-07 09:35:46.777777] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:57.936 [2024-10-07 09:35:46.777800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.785706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.785736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.793689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.793715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.801679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.801704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.809681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.809713] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:57.936 [2024-10-07 09:35:46.809725] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:57.936 [2024-10-07 09:35:46.809731] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:57.936 [2024-10-07 09:35:46.809737] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:57.936 [2024-10-07 09:35:46.809743] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:57.936 [2024-10-07 09:35:46.809753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:57.936 [2024-10-07 09:35:46.809765] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:57.936 [2024-10-07 09:35:46.809774] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:57.936 [2024-10-07 09:35:46.809780] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.936 [2024-10-07 09:35:46.809789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.809800] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:57.936 [2024-10-07 09:35:46.809808] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.936 
[2024-10-07 09:35:46.809814] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.936 [2024-10-07 09:35:46.809823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.809835] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:57.936 [2024-10-07 09:35:46.809844] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:57.936 [2024-10-07 09:35:46.809849] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.936 [2024-10-07 09:35:46.809858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:57.936 [2024-10-07 09:35:46.817679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.817706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.817724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:57.936 [2024-10-07 09:35:46.817736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:57.936 ===================================================== 00:14:57.936 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:57.936 ===================================================== 00:14:57.936 Controller Capabilities/Features 00:14:57.936 ================================ 00:14:57.936 Vendor ID: 4e58 00:14:57.936 Subsystem Vendor ID: 4e58 
00:14:57.936 Serial Number: SPDK2 00:14:57.936 Model Number: SPDK bdev Controller 00:14:57.936 Firmware Version: 25.01 00:14:57.936 Recommended Arb Burst: 6 00:14:57.936 IEEE OUI Identifier: 8d 6b 50 00:14:57.936 Multi-path I/O 00:14:57.936 May have multiple subsystem ports: Yes 00:14:57.936 May have multiple controllers: Yes 00:14:57.936 Associated with SR-IOV VF: No 00:14:57.936 Max Data Transfer Size: 131072 00:14:57.936 Max Number of Namespaces: 32 00:14:57.936 Max Number of I/O Queues: 127 00:14:57.936 NVMe Specification Version (VS): 1.3 00:14:57.936 NVMe Specification Version (Identify): 1.3 00:14:57.936 Maximum Queue Entries: 256 00:14:57.936 Contiguous Queues Required: Yes 00:14:57.936 Arbitration Mechanisms Supported 00:14:57.936 Weighted Round Robin: Not Supported 00:14:57.936 Vendor Specific: Not Supported 00:14:57.936 Reset Timeout: 15000 ms 00:14:57.936 Doorbell Stride: 4 bytes 00:14:57.936 NVM Subsystem Reset: Not Supported 00:14:57.936 Command Sets Supported 00:14:57.936 NVM Command Set: Supported 00:14:57.936 Boot Partition: Not Supported 00:14:57.936 Memory Page Size Minimum: 4096 bytes 00:14:57.936 Memory Page Size Maximum: 4096 bytes 00:14:57.936 Persistent Memory Region: Not Supported 00:14:57.936 Optional Asynchronous Events Supported 00:14:57.936 Namespace Attribute Notices: Supported 00:14:57.936 Firmware Activation Notices: Not Supported 00:14:57.936 ANA Change Notices: Not Supported 00:14:57.936 PLE Aggregate Log Change Notices: Not Supported 00:14:57.936 LBA Status Info Alert Notices: Not Supported 00:14:57.936 EGE Aggregate Log Change Notices: Not Supported 00:14:57.936 Normal NVM Subsystem Shutdown event: Not Supported 00:14:57.936 Zone Descriptor Change Notices: Not Supported 00:14:57.936 Discovery Log Change Notices: Not Supported 00:14:57.936 Controller Attributes 00:14:57.936 128-bit Host Identifier: Supported 00:14:57.936 Non-Operational Permissive Mode: Not Supported 00:14:57.936 NVM Sets: Not Supported 00:14:57.936 Read Recovery 
Levels: Not Supported 00:14:57.937 Endurance Groups: Not Supported 00:14:57.937 Predictable Latency Mode: Not Supported 00:14:57.937 Traffic Based Keep ALive: Not Supported 00:14:57.937 Namespace Granularity: Not Supported 00:14:57.937 SQ Associations: Not Supported 00:14:57.937 UUID List: Not Supported 00:14:57.937 Multi-Domain Subsystem: Not Supported 00:14:57.937 Fixed Capacity Management: Not Supported 00:14:57.937 Variable Capacity Management: Not Supported 00:14:57.937 Delete Endurance Group: Not Supported 00:14:57.937 Delete NVM Set: Not Supported 00:14:57.937 Extended LBA Formats Supported: Not Supported 00:14:57.937 Flexible Data Placement Supported: Not Supported 00:14:57.937 00:14:57.937 Controller Memory Buffer Support 00:14:57.937 ================================ 00:14:57.937 Supported: No 00:14:57.937 00:14:57.937 Persistent Memory Region Support 00:14:57.937 ================================ 00:14:57.937 Supported: No 00:14:57.937 00:14:57.937 Admin Command Set Attributes 00:14:57.937 ============================ 00:14:57.937 Security Send/Receive: Not Supported 00:14:57.937 Format NVM: Not Supported 00:14:57.937 Firmware Activate/Download: Not Supported 00:14:57.937 Namespace Management: Not Supported 00:14:57.937 Device Self-Test: Not Supported 00:14:57.937 Directives: Not Supported 00:14:57.937 NVMe-MI: Not Supported 00:14:57.937 Virtualization Management: Not Supported 00:14:57.937 Doorbell Buffer Config: Not Supported 00:14:57.937 Get LBA Status Capability: Not Supported 00:14:57.937 Command & Feature Lockdown Capability: Not Supported 00:14:57.937 Abort Command Limit: 4 00:14:57.937 Async Event Request Limit: 4 00:14:57.937 Number of Firmware Slots: N/A 00:14:57.937 Firmware Slot 1 Read-Only: N/A 00:14:57.937 Firmware Activation Without Reset: N/A 00:14:57.937 Multiple Update Detection Support: N/A 00:14:57.937 Firmware Update Granularity: No Information Provided 00:14:57.937 Per-Namespace SMART Log: No 00:14:57.937 Asymmetric Namespace Access 
Log Page: Not Supported 00:14:57.937 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:57.937 Command Effects Log Page: Supported 00:14:57.937 Get Log Page Extended Data: Supported 00:14:57.937 Telemetry Log Pages: Not Supported 00:14:57.937 Persistent Event Log Pages: Not Supported 00:14:57.937 Supported Log Pages Log Page: May Support 00:14:57.937 Commands Supported & Effects Log Page: Not Supported 00:14:57.937 Feature Identifiers & Effects Log Page:May Support 00:14:57.937 NVMe-MI Commands & Effects Log Page: May Support 00:14:57.937 Data Area 4 for Telemetry Log: Not Supported 00:14:57.937 Error Log Page Entries Supported: 128 00:14:57.937 Keep Alive: Supported 00:14:57.937 Keep Alive Granularity: 10000 ms 00:14:57.937 00:14:57.937 NVM Command Set Attributes 00:14:57.937 ========================== 00:14:57.937 Submission Queue Entry Size 00:14:57.937 Max: 64 00:14:57.937 Min: 64 00:14:57.937 Completion Queue Entry Size 00:14:57.937 Max: 16 00:14:57.937 Min: 16 00:14:57.937 Number of Namespaces: 32 00:14:57.937 Compare Command: Supported 00:14:57.937 Write Uncorrectable Command: Not Supported 00:14:57.937 Dataset Management Command: Supported 00:14:57.937 Write Zeroes Command: Supported 00:14:57.937 Set Features Save Field: Not Supported 00:14:57.937 Reservations: Not Supported 00:14:57.937 Timestamp: Not Supported 00:14:57.937 Copy: Supported 00:14:57.937 Volatile Write Cache: Present 00:14:57.937 Atomic Write Unit (Normal): 1 00:14:57.937 Atomic Write Unit (PFail): 1 00:14:57.937 Atomic Compare & Write Unit: 1 00:14:57.937 Fused Compare & Write: Supported 00:14:57.937 Scatter-Gather List 00:14:57.937 SGL Command Set: Supported (Dword aligned) 00:14:57.937 SGL Keyed: Not Supported 00:14:57.937 SGL Bit Bucket Descriptor: Not Supported 00:14:57.937 SGL Metadata Pointer: Not Supported 00:14:57.937 Oversized SGL: Not Supported 00:14:57.937 SGL Metadata Address: Not Supported 00:14:57.937 SGL Offset: Not Supported 00:14:57.937 Transport SGL Data Block: Not Supported 
00:14:57.937 Replay Protected Memory Block: Not Supported 00:14:57.937 00:14:57.937 Firmware Slot Information 00:14:57.937 ========================= 00:14:57.937 Active slot: 1 00:14:57.937 Slot 1 Firmware Revision: 25.01 00:14:57.937 00:14:57.937 00:14:57.937 Commands Supported and Effects 00:14:57.937 ============================== 00:14:57.937 Admin Commands 00:14:57.937 -------------- 00:14:57.937 Get Log Page (02h): Supported 00:14:57.937 Identify (06h): Supported 00:14:57.937 Abort (08h): Supported 00:14:57.937 Set Features (09h): Supported 00:14:57.937 Get Features (0Ah): Supported 00:14:57.937 Asynchronous Event Request (0Ch): Supported 00:14:57.937 Keep Alive (18h): Supported 00:14:57.937 I/O Commands 00:14:57.937 ------------ 00:14:57.937 Flush (00h): Supported LBA-Change 00:14:57.937 Write (01h): Supported LBA-Change 00:14:57.937 Read (02h): Supported 00:14:57.937 Compare (05h): Supported 00:14:57.937 Write Zeroes (08h): Supported LBA-Change 00:14:57.937 Dataset Management (09h): Supported LBA-Change 00:14:57.937 Copy (19h): Supported LBA-Change 00:14:57.937 00:14:57.937 Error Log 00:14:57.937 ========= 00:14:57.937 00:14:57.937 Arbitration 00:14:57.937 =========== 00:14:57.937 Arbitration Burst: 1 00:14:57.937 00:14:57.937 Power Management 00:14:57.937 ================ 00:14:57.937 Number of Power States: 1 00:14:57.937 Current Power State: Power State #0 00:14:57.937 Power State #0: 00:14:57.937 Max Power: 0.00 W 00:14:57.937 Non-Operational State: Operational 00:14:57.937 Entry Latency: Not Reported 00:14:57.937 Exit Latency: Not Reported 00:14:57.937 Relative Read Throughput: 0 00:14:57.937 Relative Read Latency: 0 00:14:57.937 Relative Write Throughput: 0 00:14:57.937 Relative Write Latency: 0 00:14:57.937 Idle Power: Not Reported 00:14:57.937 Active Power: Not Reported 00:14:57.937 Non-Operational Permissive Mode: Not Supported 00:14:57.937 00:14:57.937 Health Information 00:14:57.937 ================== 00:14:57.937 Critical Warnings: 00:14:57.937 
Available Spare Space: OK 00:14:57.937 Temperature: OK 00:14:57.937 Device Reliability: OK 00:14:57.937 Read Only: No 00:14:57.937 Volatile Memory Backup: OK 00:14:57.937 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:57.937 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:57.937 Available Spare: 0% 00:14:57.937 [2024-10-07 09:35:46.817850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:57.937 [2024-10-07 09:35:46.825679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:57.937 [2024-10-07 09:35:46.825742] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:57.937 [2024-10-07 09:35:46.825761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.937 [2024-10-07 09:35:46.825772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.937 [2024-10-07 09:35:46.825783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.937 [2024-10-07 09:35:46.825792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.937 [2024-10-07 09:35:46.825873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:57.937 [2024-10-07 09:35:46.825895] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:57.937 [2024-10-07 09:35:46.826875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:14:57.937 [2024-10-07 09:35:46.826951] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:57.937 [2024-10-07 09:35:46.826981] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:57.937 [2024-10-07 09:35:46.827882] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:57.937 [2024-10-07 09:35:46.827907] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:57.937 [2024-10-07 09:35:46.827965] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:57.937 [2024-10-07 09:35:46.829152] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.937 Available Spare Threshold: 0% 00:14:57.937 Life Percentage Used: 0% 00:14:57.937 Data Units Read: 0 00:14:57.937 Data Units Written: 0 00:14:57.937 Host Read Commands: 0 00:14:57.937 Host Write Commands: 0 00:14:57.937 Controller Busy Time: 0 minutes 00:14:57.937 Power Cycles: 0 00:14:57.937 Power On Hours: 0 hours 00:14:57.937 Unsafe Shutdowns: 0 00:14:57.937 Unrecoverable Media Errors: 0 00:14:57.937 Lifetime Error Log Entries: 0 00:14:57.937 Warning Temperature Time: 0 minutes 00:14:57.937 Critical Temperature Time: 0 minutes 00:14:57.937 00:14:57.937 Number of Queues 00:14:57.937 ================ 00:14:57.937 Number of I/O Submission Queues: 127 00:14:57.937 Number of I/O Completion Queues: 127 00:14:57.937 00:14:57.937 Active Namespaces 00:14:57.937 ================= 00:14:57.937 Namespace ID:1 00:14:57.938 Error Recovery Timeout: Unlimited 00:14:57.938 Command Set Identifier: NVM (00h) 00:14:57.938 Deallocate: Supported 00:14:57.938 Deallocated/Unwritten Error: Not Supported 
00:14:57.938 Deallocated Read Value: Unknown 00:14:57.938 Deallocate in Write Zeroes: Not Supported 00:14:57.938 Deallocated Guard Field: 0xFFFF 00:14:57.938 Flush: Supported 00:14:57.938 Reservation: Supported 00:14:57.938 Namespace Sharing Capabilities: Multiple Controllers 00:14:57.938 Size (in LBAs): 131072 (0GiB) 00:14:57.938 Capacity (in LBAs): 131072 (0GiB) 00:14:57.938 Utilization (in LBAs): 131072 (0GiB) 00:14:57.938 NGUID: 6B8BDE5A36F04672A42F27089D7A4EB5 00:14:57.938 UUID: 6b8bde5a-36f0-4672-a42f-27089d7a4eb5 00:14:57.938 Thin Provisioning: Not Supported 00:14:57.938 Per-NS Atomic Units: Yes 00:14:57.938 Atomic Boundary Size (Normal): 0 00:14:57.938 Atomic Boundary Size (PFail): 0 00:14:57.938 Atomic Boundary Offset: 0 00:14:57.938 Maximum Single Source Range Length: 65535 00:14:57.938 Maximum Copy Length: 65535 00:14:57.938 Maximum Source Range Count: 1 00:14:57.938 NGUID/EUI64 Never Reused: No 00:14:57.938 Namespace Write Protected: No 00:14:57.938 Number of LBA Formats: 1 00:14:57.938 Current LBA Format: LBA Format #00 00:14:57.938 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:57.938 00:14:57.938 09:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:58.197 [2024-10-07 09:35:47.068452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.471 Initializing NVMe Controllers 00:15:03.471 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.471 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:03.471 Initialization complete. Launching workers. 
00:15:03.471 ======================================================== 00:15:03.471 Latency(us) 00:15:03.471 Device Information : IOPS MiB/s Average min max 00:15:03.471 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33774.86 131.93 3789.21 1168.80 9619.96 00:15:03.471 ======================================================== 00:15:03.471 Total : 33774.86 131.93 3789.21 1168.80 9619.96 00:15:03.471 00:15:03.471 [2024-10-07 09:35:52.171040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.471 09:35:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:03.471 [2024-10-07 09:35:52.416688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.751 Initializing NVMe Controllers 00:15:08.751 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:08.751 Initialization complete. Launching workers. 
00:15:08.751 ======================================================== 00:15:08.751 Latency(us) 00:15:08.751 Device Information : IOPS MiB/s Average min max 00:15:08.751 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31693.93 123.80 4037.95 1192.49 10281.53 00:15:08.751 ======================================================== 00:15:08.751 Total : 31693.93 123.80 4037.95 1192.49 10281.53 00:15:08.751 00:15:08.751 [2024-10-07 09:35:57.438479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.751 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:08.751 [2024-10-07 09:35:57.650393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.031 [2024-10-07 09:36:02.802799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.031 Initializing NVMe Controllers 00:15:14.031 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.031 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.031 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:14.031 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:14.031 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:14.031 Initialization complete. Launching workers. 
00:15:14.031 Starting thread on core 2 00:15:14.031 Starting thread on core 3 00:15:14.031 Starting thread on core 1 00:15:14.031 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:14.291 [2024-10-07 09:36:03.093183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.590 [2024-10-07 09:36:06.158961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.590 Initializing NVMe Controllers 00:15:17.590 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.590 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.590 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:17.590 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:17.590 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:17.590 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:17.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:17.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:17.590 Initialization complete. Launching workers. 
00:15:17.590 Starting thread on core 1 with urgent priority queue 00:15:17.590 Starting thread on core 2 with urgent priority queue 00:15:17.590 Starting thread on core 3 with urgent priority queue 00:15:17.590 Starting thread on core 0 with urgent priority queue 00:15:17.590 SPDK bdev Controller (SPDK2 ) core 0: 5650.67 IO/s 17.70 secs/100000 ios 00:15:17.590 SPDK bdev Controller (SPDK2 ) core 1: 5309.67 IO/s 18.83 secs/100000 ios 00:15:17.590 SPDK bdev Controller (SPDK2 ) core 2: 5460.67 IO/s 18.31 secs/100000 ios 00:15:17.590 SPDK bdev Controller (SPDK2 ) core 3: 5724.00 IO/s 17.47 secs/100000 ios 00:15:17.590 ======================================================== 00:15:17.590 00:15:17.590 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:17.590 [2024-10-07 09:36:06.446214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.590 Initializing NVMe Controllers 00:15:17.590 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.590 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.590 Namespace ID: 1 size: 0GB 00:15:17.590 Initialization complete. 00:15:17.590 INFO: using host memory buffer for IO 00:15:17.590 Hello world! 
00:15:17.590 [2024-10-07 09:36:06.458289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.590 09:36:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:17.860 [2024-10-07 09:36:06.749918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.233 Initializing NVMe Controllers 00:15:19.234 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.234 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.234 Initialization complete. Launching workers. 00:15:19.234 submit (in ns) avg, min, max = 6766.4, 3506.7, 4042123.3 00:15:19.234 complete (in ns) avg, min, max = 26479.5, 2065.6, 4020998.9 00:15:19.234 00:15:19.234 Submit histogram 00:15:19.234 ================ 00:15:19.234 Range in us Cumulative Count 00:15:19.234 3.484 - 3.508: 0.0077% ( 1) 00:15:19.234 3.508 - 3.532: 0.1152% ( 14) 00:15:19.234 3.532 - 3.556: 1.8271% ( 223) 00:15:19.234 3.556 - 3.579: 5.5428% ( 484) 00:15:19.234 3.579 - 3.603: 11.5001% ( 776) 00:15:19.234 3.603 - 3.627: 18.9698% ( 973) 00:15:19.234 3.627 - 3.650: 27.9902% ( 1175) 00:15:19.234 3.650 - 3.674: 36.7649% ( 1143) 00:15:19.234 3.674 - 3.698: 45.2326% ( 1103) 00:15:19.234 3.698 - 3.721: 52.2186% ( 910) 00:15:19.234 3.721 - 3.745: 58.8362% ( 862) 00:15:19.234 3.745 - 3.769: 62.9050% ( 530) 00:15:19.234 3.769 - 3.793: 67.0352% ( 538) 00:15:19.234 3.793 - 3.816: 69.9601% ( 381) 00:15:19.234 3.816 - 3.840: 72.8850% ( 381) 00:15:19.234 3.840 - 3.864: 76.1554% ( 426) 00:15:19.234 3.864 - 3.887: 79.5332% ( 440) 00:15:19.234 3.887 - 3.911: 82.7806% ( 423) 00:15:19.234 3.911 - 3.935: 85.1912% ( 314) 00:15:19.234 3.935 - 3.959: 87.2870% ( 273) 00:15:19.234 3.959 - 3.982: 89.0987% ( 236) 
00:15:19.234 3.982 - 4.006: 90.6341% ( 200) 00:15:19.234 4.006 - 4.030: 91.8624% ( 160) 00:15:19.234 4.030 - 4.053: 92.8451% ( 128) 00:15:19.234 4.053 - 4.077: 93.4823% ( 83) 00:15:19.234 4.077 - 4.101: 94.2193% ( 96) 00:15:19.234 4.101 - 4.124: 94.7336% ( 67) 00:15:19.234 4.124 - 4.148: 95.1328% ( 52) 00:15:19.234 4.148 - 4.172: 95.4092% ( 36) 00:15:19.234 4.172 - 4.196: 95.6088% ( 26) 00:15:19.234 4.196 - 4.219: 95.7546% ( 19) 00:15:19.234 4.219 - 4.243: 95.8698% ( 15) 00:15:19.234 4.243 - 4.267: 95.9773% ( 14) 00:15:19.234 4.267 - 4.290: 96.0924% ( 15) 00:15:19.234 4.290 - 4.314: 96.1538% ( 8) 00:15:19.234 4.314 - 4.338: 96.2383% ( 11) 00:15:19.234 4.338 - 4.361: 96.3151% ( 10) 00:15:19.234 4.361 - 4.385: 96.3765% ( 8) 00:15:19.234 4.385 - 4.409: 96.4379% ( 8) 00:15:19.234 4.409 - 4.433: 96.4763% ( 5) 00:15:19.234 4.433 - 4.456: 96.4840% ( 1) 00:15:19.234 4.456 - 4.480: 96.4993% ( 2) 00:15:19.234 4.504 - 4.527: 96.5070% ( 1) 00:15:19.234 4.551 - 4.575: 96.5147% ( 1) 00:15:19.234 4.599 - 4.622: 96.5223% ( 1) 00:15:19.234 4.622 - 4.646: 96.5300% ( 1) 00:15:19.234 4.646 - 4.670: 96.5377% ( 1) 00:15:19.234 4.670 - 4.693: 96.5454% ( 1) 00:15:19.234 4.693 - 4.717: 96.5530% ( 1) 00:15:19.234 4.717 - 4.741: 96.5607% ( 1) 00:15:19.234 4.741 - 4.764: 96.5684% ( 1) 00:15:19.234 4.764 - 4.788: 96.6298% ( 8) 00:15:19.234 4.788 - 4.812: 96.6759% ( 6) 00:15:19.234 4.812 - 4.836: 96.7219% ( 6) 00:15:19.234 4.836 - 4.859: 96.7757% ( 7) 00:15:19.234 4.859 - 4.883: 96.8601% ( 11) 00:15:19.234 4.883 - 4.907: 96.9369% ( 10) 00:15:19.234 4.907 - 4.930: 96.9830% ( 6) 00:15:19.234 4.930 - 4.954: 97.0213% ( 5) 00:15:19.234 4.954 - 4.978: 97.1135% ( 12) 00:15:19.234 4.978 - 5.001: 97.1749% ( 8) 00:15:19.234 5.001 - 5.025: 97.1979% ( 3) 00:15:19.234 5.025 - 5.049: 97.2209% ( 3) 00:15:19.234 5.049 - 5.073: 97.2517% ( 4) 00:15:19.234 5.073 - 5.096: 97.2747% ( 3) 00:15:19.234 5.096 - 5.120: 97.2977% ( 3) 00:15:19.234 5.120 - 5.144: 97.3515% ( 7) 00:15:19.234 5.144 - 5.167: 97.3898% ( 5) 
00:15:19.234 5.167 - 5.191: 97.4359% ( 6) 00:15:19.234 5.191 - 5.215: 97.4589% ( 3) 00:15:19.234 5.215 - 5.239: 97.4820% ( 3) 00:15:19.234 5.239 - 5.262: 97.4896% ( 1) 00:15:19.234 5.262 - 5.286: 97.4973% ( 1) 00:15:19.234 5.286 - 5.310: 97.5127% ( 2) 00:15:19.234 5.333 - 5.357: 97.5203% ( 1) 00:15:19.234 5.357 - 5.381: 97.5280% ( 1) 00:15:19.234 5.381 - 5.404: 97.5357% ( 1) 00:15:19.234 5.428 - 5.452: 97.5434% ( 1) 00:15:19.234 5.499 - 5.523: 97.5511% ( 1) 00:15:19.234 5.689 - 5.713: 97.5587% ( 1) 00:15:19.234 5.713 - 5.736: 97.5664% ( 1) 00:15:19.234 5.784 - 5.807: 97.5741% ( 1) 00:15:19.234 5.831 - 5.855: 97.5818% ( 1) 00:15:19.234 5.855 - 5.879: 97.5971% ( 2) 00:15:19.234 5.973 - 5.997: 97.6048% ( 1) 00:15:19.234 5.997 - 6.021: 97.6125% ( 1) 00:15:19.234 6.044 - 6.068: 97.6278% ( 2) 00:15:19.234 6.163 - 6.210: 97.6355% ( 1) 00:15:19.234 6.210 - 6.258: 97.6432% ( 1) 00:15:19.234 6.258 - 6.305: 97.6509% ( 1) 00:15:19.234 6.305 - 6.353: 97.6585% ( 1) 00:15:19.234 6.400 - 6.447: 97.6662% ( 1) 00:15:19.234 6.447 - 6.495: 97.6892% ( 3) 00:15:19.234 6.590 - 6.637: 97.6969% ( 1) 00:15:19.234 6.637 - 6.684: 97.7123% ( 2) 00:15:19.234 6.732 - 6.779: 97.7199% ( 1) 00:15:19.234 6.827 - 6.874: 97.7353% ( 2) 00:15:19.234 6.874 - 6.921: 97.7430% ( 1) 00:15:19.234 7.016 - 7.064: 97.7660% ( 3) 00:15:19.234 7.111 - 7.159: 97.7737% ( 1) 00:15:19.234 7.159 - 7.206: 97.7814% ( 1) 00:15:19.234 7.206 - 7.253: 97.8044% ( 3) 00:15:19.234 7.301 - 7.348: 97.8121% ( 1) 00:15:19.234 7.348 - 7.396: 97.8197% ( 1) 00:15:19.234 7.443 - 7.490: 97.8351% ( 2) 00:15:19.234 7.538 - 7.585: 97.8428% ( 1) 00:15:19.234 7.585 - 7.633: 97.8505% ( 1) 00:15:19.234 7.633 - 7.680: 97.8581% ( 1) 00:15:19.234 7.680 - 7.727: 97.8658% ( 1) 00:15:19.234 7.727 - 7.775: 97.8735% ( 1) 00:15:19.234 7.822 - 7.870: 97.8812% ( 1) 00:15:19.234 7.917 - 7.964: 97.8965% ( 2) 00:15:19.234 7.964 - 8.012: 97.9042% ( 1) 00:15:19.234 8.012 - 8.059: 97.9119% ( 1) 00:15:19.234 8.154 - 8.201: 97.9272% ( 2) 00:15:19.234 8.201 - 
8.249: 97.9349% ( 1) 00:15:19.234 8.344 - 8.391: 97.9503% ( 2) 00:15:19.234 8.439 - 8.486: 97.9810% ( 4) 00:15:19.234 8.486 - 8.533: 98.0193% ( 5) 00:15:19.234 8.581 - 8.628: 98.0424% ( 3) 00:15:19.234 8.676 - 8.723: 98.0501% ( 1) 00:15:19.234 8.723 - 8.770: 98.0577% ( 1) 00:15:19.234 8.770 - 8.818: 98.0731% ( 2) 00:15:19.234 8.818 - 8.865: 98.0808% ( 1) 00:15:19.234 8.865 - 8.913: 98.0961% ( 2) 00:15:19.234 8.913 - 8.960: 98.1038% ( 1) 00:15:19.234 8.960 - 9.007: 98.1191% ( 2) 00:15:19.234 9.102 - 9.150: 98.1345% ( 2) 00:15:19.234 9.150 - 9.197: 98.1499% ( 2) 00:15:19.234 9.197 - 9.244: 98.1652% ( 2) 00:15:19.234 9.292 - 9.339: 98.1729% ( 1) 00:15:19.234 9.387 - 9.434: 98.1806% ( 1) 00:15:19.234 9.434 - 9.481: 98.1882% ( 1) 00:15:19.234 9.481 - 9.529: 98.1959% ( 1) 00:15:19.234 9.529 - 9.576: 98.2189% ( 3) 00:15:19.234 9.576 - 9.624: 98.2420% ( 3) 00:15:19.234 9.624 - 9.671: 98.2497% ( 1) 00:15:19.234 9.671 - 9.719: 98.2650% ( 2) 00:15:19.234 9.719 - 9.766: 98.2727% ( 1) 00:15:19.234 9.766 - 9.813: 98.3034% ( 4) 00:15:19.234 9.813 - 9.861: 98.3264% ( 3) 00:15:19.234 10.003 - 10.050: 98.3418% ( 2) 00:15:19.234 10.145 - 10.193: 98.3495% ( 1) 00:15:19.234 10.240 - 10.287: 98.3725% ( 3) 00:15:19.235 10.335 - 10.382: 98.3802% ( 1) 00:15:19.235 10.382 - 10.430: 98.3878% ( 1) 00:15:19.235 10.430 - 10.477: 98.3955% ( 1) 00:15:19.235 10.477 - 10.524: 98.4109% ( 2) 00:15:19.235 10.572 - 10.619: 98.4262% ( 2) 00:15:19.235 10.619 - 10.667: 98.4339% ( 1) 00:15:19.235 10.667 - 10.714: 98.4493% ( 2) 00:15:19.235 10.761 - 10.809: 98.4569% ( 1) 00:15:19.235 10.809 - 10.856: 98.4723% ( 2) 00:15:19.235 10.856 - 10.904: 98.4953% ( 3) 00:15:19.235 10.904 - 10.951: 98.5030% ( 1) 00:15:19.235 10.951 - 10.999: 98.5260% ( 3) 00:15:19.235 10.999 - 11.046: 98.5414% ( 2) 00:15:19.235 11.046 - 11.093: 98.5567% ( 2) 00:15:19.235 11.236 - 11.283: 98.5721% ( 2) 00:15:19.235 11.283 - 11.330: 98.5798% ( 1) 00:15:19.235 11.330 - 11.378: 98.5874% ( 1) 00:15:19.235 11.378 - 11.425: 98.5951% ( 1) 
00:15:19.235 11.425 - 11.473: 98.6105% ( 2) 00:15:19.235 11.473 - 11.520: 98.6181% ( 1) 00:15:19.235 11.567 - 11.615: 98.6258% ( 1) 00:15:19.235 11.615 - 11.662: 98.6412% ( 2) 00:15:19.235 11.662 - 11.710: 98.6489% ( 1) 00:15:19.235 11.899 - 11.947: 98.6565% ( 1) 00:15:19.235 11.994 - 12.041: 98.6642% ( 1) 00:15:19.235 12.136 - 12.231: 98.6872% ( 3) 00:15:19.235 12.326 - 12.421: 98.6949% ( 1) 00:15:19.235 12.421 - 12.516: 98.7026% ( 1) 00:15:19.235 12.516 - 12.610: 98.7103% ( 1) 00:15:19.235 12.610 - 12.705: 98.7179% ( 1) 00:15:19.235 12.800 - 12.895: 98.7410% ( 3) 00:15:19.235 12.990 - 13.084: 98.7487% ( 1) 00:15:19.235 13.179 - 13.274: 98.7563% ( 1) 00:15:19.235 13.559 - 13.653: 98.7794% ( 3) 00:15:19.235 13.653 - 13.748: 98.7870% ( 1) 00:15:19.235 13.938 - 14.033: 98.8101% ( 3) 00:15:19.235 14.127 - 14.222: 98.8331% ( 3) 00:15:19.235 14.222 - 14.317: 98.8561% ( 3) 00:15:19.235 14.412 - 14.507: 98.8638% ( 1) 00:15:19.235 14.507 - 14.601: 98.8792% ( 2) 00:15:19.235 14.601 - 14.696: 98.8945% ( 2) 00:15:19.235 14.696 - 14.791: 98.9022% ( 1) 00:15:19.235 14.886 - 14.981: 98.9099% ( 1) 00:15:19.235 15.076 - 15.170: 98.9252% ( 2) 00:15:19.235 15.170 - 15.265: 98.9406% ( 2) 00:15:19.235 15.455 - 15.550: 98.9483% ( 1) 00:15:19.235 15.550 - 15.644: 98.9559% ( 1) 00:15:19.235 16.877 - 16.972: 98.9636% ( 1) 00:15:19.235 17.256 - 17.351: 98.9866% ( 3) 00:15:19.235 17.351 - 17.446: 99.0250% ( 5) 00:15:19.235 17.446 - 17.541: 99.0634% ( 5) 00:15:19.235 17.541 - 17.636: 99.0788% ( 2) 00:15:19.235 17.636 - 17.730: 99.1555% ( 10) 00:15:19.235 17.730 - 17.825: 99.1786% ( 3) 00:15:19.235 17.825 - 17.920: 99.2400% ( 8) 00:15:19.235 17.920 - 18.015: 99.3091% ( 9) 00:15:19.235 18.015 - 18.110: 99.3782% ( 9) 00:15:19.235 18.110 - 18.204: 99.4396% ( 8) 00:15:19.235 18.204 - 18.299: 99.5164% ( 10) 00:15:19.235 18.299 - 18.394: 99.5547% ( 5) 00:15:19.235 18.394 - 18.489: 99.6238% ( 9) 00:15:19.235 18.489 - 18.584: 99.6699% ( 6) 00:15:19.235 18.584 - 18.679: 99.6852% ( 2) 00:15:19.235 
18.679 - 18.773: 99.7083% ( 3) 00:15:19.235 18.773 - 18.868: 99.7467% ( 5) 00:15:19.235 18.868 - 18.963: 99.7774% ( 4) 00:15:19.235 18.963 - 19.058: 99.8004% ( 3) 00:15:19.235 19.058 - 19.153: 99.8081% ( 1) 00:15:19.235 19.627 - 19.721: 99.8158% ( 1) 00:15:19.235 20.196 - 20.290: 99.8311% ( 2) 00:15:19.235 20.764 - 20.859: 99.8388% ( 1) 00:15:19.235 23.040 - 23.135: 99.8465% ( 1) 00:15:19.235 23.419 - 23.514: 99.8541% ( 1) 00:15:19.235 23.704 - 23.799: 99.8618% ( 1) 00:15:19.235 27.117 - 27.307: 99.8772% ( 2) 00:15:19.235 27.307 - 27.496: 99.8848% ( 1) 00:15:19.235 28.065 - 28.255: 99.8925% ( 1) 00:15:19.235 29.013 - 29.203: 99.9002% ( 1) 00:15:19.235 29.203 - 29.393: 99.9079% ( 1) 00:15:19.235 34.133 - 34.323: 99.9156% ( 1) 00:15:19.235 34.513 - 34.702: 99.9232% ( 1) 00:15:19.235 35.461 - 35.650: 99.9309% ( 1) 00:15:19.235 3980.705 - 4004.978: 99.9693% ( 5) 00:15:19.235 4004.978 - 4029.250: 99.9923% ( 3) 00:15:19.235 4029.250 - 4053.523: 100.0000% ( 1) 00:15:19.235 00:15:19.235 Complete histogram 00:15:19.235 ================== 00:15:19.235 Range in us Cumulative Count 00:15:19.235 2.062 - 2.074: 1.0825% ( 141) 00:15:19.235 2.074 - 2.086: 31.0302% ( 3901) 00:15:19.235 2.086 - 2.098: 47.7430% ( 2177) 00:15:19.235 2.098 - 2.110: 50.1228% ( 310) 00:15:19.235 2.110 - 2.121: 57.6155% ( 976) 00:15:19.235 2.121 - 2.133: 60.5328% ( 380) 00:15:19.235 2.133 - 2.145: 65.0929% ( 594) 00:15:19.235 2.145 - 2.157: 78.8116% ( 1787) 00:15:19.235 2.157 - 2.169: 83.2719% ( 581) 00:15:19.235 2.169 - 2.181: 84.7766% ( 196) 00:15:19.235 2.181 - 2.193: 87.8781% ( 404) 00:15:19.235 2.193 - 2.204: 89.4058% ( 199) 00:15:19.235 2.204 - 2.216: 90.3808% ( 127) 00:15:19.235 2.216 - 2.228: 91.7089% ( 173) 00:15:19.235 2.228 - 2.240: 93.0370% ( 173) 00:15:19.235 2.240 - 2.252: 94.6185% ( 206) 00:15:19.235 2.252 - 2.264: 95.1558% ( 70) 00:15:19.235 2.264 - 2.276: 95.2940% ( 18) 00:15:19.235 2.276 - 2.287: 95.4245% ( 17) 00:15:19.235 2.287 - 2.299: 95.4936% ( 9) 00:15:19.235 2.299 - 2.311: 
95.6625% ( 22) 00:15:19.235 2.311 - 2.323: 95.9466% ( 37) 00:15:19.235 [2024-10-07 09:36:07.851448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.235 2.323 - 2.335: 96.0157% ( 9) 00:15:19.235 2.335 - 2.347: 96.0387% ( 3) 00:15:19.235 2.347 - 2.359: 96.0617% ( 3) 00:15:19.235 2.359 - 2.370: 96.1155% ( 7) 00:15:19.235 2.370 - 2.382: 96.2460% ( 17) 00:15:19.235 2.382 - 2.394: 96.5377% ( 38) 00:15:19.235 2.394 - 2.406: 96.8294% ( 38) 00:15:19.235 2.406 - 2.418: 97.1058% ( 36) 00:15:19.235 2.418 - 2.430: 97.3668% ( 34) 00:15:19.235 2.430 - 2.441: 97.6278% ( 34) 00:15:19.235 2.441 - 2.453: 97.8121% ( 24) 00:15:19.235 2.453 - 2.465: 98.0193% ( 27) 00:15:19.235 2.465 - 2.477: 98.1038% ( 11) 00:15:19.235 2.477 - 2.489: 98.2189% ( 15) 00:15:19.235 2.489 - 2.501: 98.2573% ( 5) 00:15:19.235 2.501 - 2.513: 98.3264% ( 9) 00:15:19.235 2.513 - 2.524: 98.3725% ( 6) 00:15:19.235 2.524 - 2.536: 98.3955% ( 3) 00:15:19.235 2.560 - 2.572: 98.4262% ( 4) 00:15:19.235 2.572 - 2.584: 98.4416% ( 2) 00:15:19.235 2.631 - 2.643: 98.4569% ( 2) 00:15:19.235 2.643 - 2.655: 98.4646% ( 1) 00:15:19.235 2.809 - 2.821: 98.4723% ( 1) 00:15:19.235 2.821 - 2.833: 98.4800% ( 1) 00:15:19.235 3.224 - 3.247: 98.4876% ( 1) 00:15:19.235 3.342 - 3.366: 98.4953% ( 1) 00:15:19.235 3.461 - 3.484: 98.5030% ( 1) 00:15:19.235 3.532 - 3.556: 98.5183% ( 2) 00:15:19.235 3.556 - 3.579: 98.5260% ( 1) 00:15:19.235 3.627 - 3.650: 98.5567% ( 4) 00:15:19.235 3.650 - 3.674: 98.5644% ( 1) 00:15:19.235 3.674 - 3.698: 98.5951% ( 4) 00:15:19.235 3.698 - 3.721: 98.6105% ( 2) 00:15:19.235 3.721 - 3.745: 98.6181% ( 1) 00:15:19.235 3.745 - 3.769: 98.6258% ( 1) 00:15:19.235 3.769 - 3.793: 98.6335% ( 1) 00:15:19.235 3.793 - 3.816: 98.6565% ( 3) 00:15:19.235 3.816 - 3.840: 98.6642% ( 1) 00:15:19.235 3.887 - 3.911: 98.6719% ( 1) 00:15:19.235 3.959 - 3.982: 98.6796% ( 1) 00:15:19.235 3.982 - 4.006: 98.7026% ( 3) 00:15:19.235 4.006 - 4.030: 98.7179% ( 2) 00:15:19.235 4.030 - 
4.053: 98.7256% ( 1) 00:15:19.235 4.053 - 4.077: 98.7333% ( 1) 00:15:19.235 4.124 - 4.148: 98.7410% ( 1) 00:15:19.235 4.172 - 4.196: 98.7563% ( 2) 00:15:19.235 4.219 - 4.243: 98.7640% ( 1) 00:15:19.235 6.210 - 6.258: 98.7717% ( 1) 00:15:19.235 6.637 - 6.684: 98.7794% ( 1) 00:15:19.235 6.684 - 6.732: 98.7870% ( 1) 00:15:19.235 6.827 - 6.874: 98.7947% ( 1) 00:15:19.235 6.874 - 6.921: 98.8101% ( 2) 00:15:19.235 7.585 - 7.633: 98.8254% ( 2) 00:15:19.235 7.633 - 7.680: 98.8331% ( 1) 00:15:19.235 8.059 - 8.107: 98.8408% ( 1) 00:15:19.235 8.107 - 8.154: 98.8485% ( 1) 00:15:19.235 8.154 - 8.201: 98.8561% ( 1) 00:15:19.235 8.344 - 8.391: 98.8638% ( 1) 00:15:19.235 8.439 - 8.486: 98.8715% ( 1) 00:15:19.235 9.481 - 9.529: 98.8792% ( 1) 00:15:19.235 9.529 - 9.576: 98.8868% ( 1) 00:15:19.235 13.369 - 13.464: 98.8945% ( 1) 00:15:19.235 14.033 - 14.127: 98.9022% ( 1) 00:15:19.235 15.644 - 15.739: 98.9252% ( 3) 00:15:19.236 15.834 - 15.929: 98.9636% ( 5) 00:15:19.236 15.929 - 16.024: 98.9943% ( 4) 00:15:19.236 16.024 - 16.119: 99.0250% ( 4) 00:15:19.236 16.119 - 16.213: 99.0404% ( 2) 00:15:19.236 16.213 - 16.308: 99.0941% ( 7) 00:15:19.236 16.308 - 16.403: 99.1325% ( 5) 00:15:19.236 16.403 - 16.498: 99.1709% ( 5) 00:15:19.236 16.498 - 16.593: 99.1862% ( 2) 00:15:19.236 16.593 - 16.687: 99.1939% ( 1) 00:15:19.236 16.687 - 16.782: 99.2707% ( 10) 00:15:19.236 16.782 - 16.877: 99.3091% ( 5) 00:15:19.236 16.877 - 16.972: 99.3244% ( 2) 00:15:19.236 16.972 - 17.067: 99.3321% ( 1) 00:15:19.236 17.161 - 17.256: 99.3475% ( 2) 00:15:19.236 17.256 - 17.351: 99.3551% ( 1) 00:15:19.236 17.446 - 17.541: 99.3628% ( 1) 00:15:19.236 18.015 - 18.110: 99.3782% ( 2) 00:15:19.236 23.893 - 23.988: 99.3858% ( 1) 00:15:19.236 30.151 - 30.341: 99.3935% ( 1) 00:15:19.236 3835.070 - 3859.342: 99.4012% ( 1) 00:15:19.236 3980.705 - 4004.978: 99.8465% ( 58) 00:15:19.236 4004.978 - 4029.250: 100.0000% ( 20) 00:15:19.236 00:15:19.236 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:19.236 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:19.236 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:19.236 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:19.236 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.236 [ 00:15:19.236 { 00:15:19.236 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.236 "subtype": "Discovery", 00:15:19.236 "listen_addresses": [], 00:15:19.236 "allow_any_host": true, 00:15:19.236 "hosts": [] 00:15:19.236 }, 00:15:19.236 { 00:15:19.236 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:19.236 "subtype": "NVMe", 00:15:19.236 "listen_addresses": [ 00:15:19.236 { 00:15:19.236 "trtype": "VFIOUSER", 00:15:19.236 "adrfam": "IPv4", 00:15:19.236 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:19.236 "trsvcid": "0" 00:15:19.236 } 00:15:19.236 ], 00:15:19.236 "allow_any_host": true, 00:15:19.236 "hosts": [], 00:15:19.236 "serial_number": "SPDK1", 00:15:19.236 "model_number": "SPDK bdev Controller", 00:15:19.236 "max_namespaces": 32, 00:15:19.236 "min_cntlid": 1, 00:15:19.236 "max_cntlid": 65519, 00:15:19.236 "namespaces": [ 00:15:19.236 { 00:15:19.236 "nsid": 1, 00:15:19.236 "bdev_name": "Malloc1", 00:15:19.236 "name": "Malloc1", 00:15:19.236 "nguid": "4735CD9965F84D9CB61198CF54FBBDD4", 00:15:19.236 "uuid": "4735cd99-65f8-4d9c-b611-98cf54fbbdd4" 00:15:19.236 }, 00:15:19.236 { 00:15:19.236 "nsid": 2, 00:15:19.236 "bdev_name": "Malloc3", 00:15:19.236 "name": "Malloc3", 00:15:19.236 "nguid": "CC8805DDCE674702AA026A3B04C389EF", 00:15:19.236 
"uuid": "cc8805dd-ce67-4702-aa02-6a3b04c389ef" 00:15:19.236 } 00:15:19.236 ] 00:15:19.236 }, 00:15:19.236 { 00:15:19.236 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:19.236 "subtype": "NVMe", 00:15:19.236 "listen_addresses": [ 00:15:19.236 { 00:15:19.236 "trtype": "VFIOUSER", 00:15:19.236 "adrfam": "IPv4", 00:15:19.236 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:19.236 "trsvcid": "0" 00:15:19.236 } 00:15:19.236 ], 00:15:19.236 "allow_any_host": true, 00:15:19.236 "hosts": [], 00:15:19.236 "serial_number": "SPDK2", 00:15:19.236 "model_number": "SPDK bdev Controller", 00:15:19.236 "max_namespaces": 32, 00:15:19.236 "min_cntlid": 1, 00:15:19.236 "max_cntlid": 65519, 00:15:19.236 "namespaces": [ 00:15:19.236 { 00:15:19.236 "nsid": 1, 00:15:19.236 "bdev_name": "Malloc2", 00:15:19.236 "name": "Malloc2", 00:15:19.236 "nguid": "6B8BDE5A36F04672A42F27089D7A4EB5", 00:15:19.236 "uuid": "6b8bde5a-36f0-4672-a42f-27089d7a4eb5" 00:15:19.236 } 00:15:19.236 ] 00:15:19.236 } 00:15:19.236 ] 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=201609 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:15:19.236 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:19.495 [2024-10-07 09:36:08.362168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:19.495 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:19.753 Malloc4 00:15:19.753 09:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:20.010 [2024-10-07 09:36:08.991973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.010 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:20.269 Asynchronous Event Request test 00:15:20.269 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:20.269 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:20.269 Registering asynchronous event callbacks... 00:15:20.269 Starting namespace attribute notice tests for all controllers... 00:15:20.269 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:20.269 aer_cb - Changed Namespace 00:15:20.269 Cleaning up... 
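The `waitforfile` polling traced above (common/autotest_common.sh@1265-1276: test for the file, bump `i` while it is below 200, sleep 0.1 s) can be sketched as a standalone helper. This is a reconstruction from the xtrace, not the exact upstream function; the 200-iteration cap and 0.1 s interval are taken from the trace, everything else is assumed.

```shell
#!/usr/bin/env bash
# Sketch of the waitforfile helper seen in the xtrace: poll until the
# path exists, giving up after 200 * 0.1 s = ~20 s.
waitforfile() {
	local i=0
	while [ ! -e "$1" ]; do
		if [ "$i" -lt 200 ]; then
			i=$((i + 1))
			sleep 0.1
		else
			return 1   # timed out waiting for the file
		fi
	done
	return 0
}

# Usage: create the touch file from a background job, as the aer binary
# does with -t /tmp/aer_touch_file, then block on it.
f=$(mktemp -u)
(sleep 0.3; touch "$f") &
waitforfile "$f" && echo "file appeared"
rm -f "$f"
```

In the test above the file is the synchronization point between the `aer` process (which touches it once AER callbacks are registered) and the script, which only then changes the namespace set.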
00:15:20.528 [ 00:15:20.528 { 00:15:20.528 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:20.528 "subtype": "Discovery", 00:15:20.528 "listen_addresses": [], 00:15:20.528 "allow_any_host": true, 00:15:20.528 "hosts": [] 00:15:20.528 }, 00:15:20.528 { 00:15:20.528 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:20.528 "subtype": "NVMe", 00:15:20.528 "listen_addresses": [ 00:15:20.528 { 00:15:20.528 "trtype": "VFIOUSER", 00:15:20.528 "adrfam": "IPv4", 00:15:20.528 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:20.528 "trsvcid": "0" 00:15:20.528 } 00:15:20.528 ], 00:15:20.528 "allow_any_host": true, 00:15:20.528 "hosts": [], 00:15:20.528 "serial_number": "SPDK1", 00:15:20.528 "model_number": "SPDK bdev Controller", 00:15:20.528 "max_namespaces": 32, 00:15:20.528 "min_cntlid": 1, 00:15:20.528 "max_cntlid": 65519, 00:15:20.528 "namespaces": [ 00:15:20.528 { 00:15:20.528 "nsid": 1, 00:15:20.528 "bdev_name": "Malloc1", 00:15:20.528 "name": "Malloc1", 00:15:20.528 "nguid": "4735CD9965F84D9CB61198CF54FBBDD4", 00:15:20.528 "uuid": "4735cd99-65f8-4d9c-b611-98cf54fbbdd4" 00:15:20.528 }, 00:15:20.528 { 00:15:20.528 "nsid": 2, 00:15:20.528 "bdev_name": "Malloc3", 00:15:20.528 "name": "Malloc3", 00:15:20.528 "nguid": "CC8805DDCE674702AA026A3B04C389EF", 00:15:20.528 "uuid": "cc8805dd-ce67-4702-aa02-6a3b04c389ef" 00:15:20.528 } 00:15:20.528 ] 00:15:20.528 }, 00:15:20.528 { 00:15:20.528 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:20.528 "subtype": "NVMe", 00:15:20.528 "listen_addresses": [ 00:15:20.528 { 00:15:20.528 "trtype": "VFIOUSER", 00:15:20.528 "adrfam": "IPv4", 00:15:20.528 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:20.528 "trsvcid": "0" 00:15:20.528 } 00:15:20.528 ], 00:15:20.528 "allow_any_host": true, 00:15:20.528 "hosts": [], 00:15:20.528 "serial_number": "SPDK2", 00:15:20.528 "model_number": "SPDK bdev Controller", 00:15:20.528 "max_namespaces": 32, 00:15:20.528 "min_cntlid": 1, 00:15:20.528 "max_cntlid": 65519, 00:15:20.528 "namespaces": [ 
00:15:20.528 { 00:15:20.528 "nsid": 1, 00:15:20.528 "bdev_name": "Malloc2", 00:15:20.528 "name": "Malloc2", 00:15:20.528 "nguid": "6B8BDE5A36F04672A42F27089D7A4EB5", 00:15:20.528 "uuid": "6b8bde5a-36f0-4672-a42f-27089d7a4eb5" 00:15:20.528 }, 00:15:20.528 { 00:15:20.528 "nsid": 2, 00:15:20.528 "bdev_name": "Malloc4", 00:15:20.528 "name": "Malloc4", 00:15:20.528 "nguid": "272D092152174029B7A352D018E05045", 00:15:20.528 "uuid": "272d0921-5217-4029-b7a3-52d018e05045" 00:15:20.528 } 00:15:20.528 ] 00:15:20.528 } 00:15:20.528 ] 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 201609 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 195827 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 195827 ']' 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 195827 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 195827 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 195827' 00:15:20.528 killing process with pid 195827 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 195827 00:15:20.528 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 195827 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=202041 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 202041' 00:15:20.787 Process pid: 202041 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 202041 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 202041 ']' 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.787 09:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.787 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:20.787 [2024-10-07 09:36:09.732883] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:20.787 [2024-10-07 09:36:09.733956] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:15:20.787 [2024-10-07 09:36:09.734025] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.046 [2024-10-07 09:36:09.792809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.046 [2024-10-07 09:36:09.899818] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.046 [2024-10-07 09:36:09.899872] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.046 [2024-10-07 09:36:09.899895] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.046 [2024-10-07 09:36:09.899906] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.046 [2024-10-07 09:36:09.899915] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:21.046 [2024-10-07 09:36:09.901322] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.046 [2024-10-07 09:36:09.901388] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.046 [2024-10-07 09:36:09.901453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.047 [2024-10-07 09:36:09.901456] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.047 [2024-10-07 09:36:09.998520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:21.047 [2024-10-07 09:36:09.998784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:21.047 [2024-10-07 09:36:09.999063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:21.047 [2024-10-07 09:36:09.999725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:21.047 [2024-10-07 09:36:09.999984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:21.047 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.047 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:21.047 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:22.427 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:22.427 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:22.427 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:22.427 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.427 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:22.427 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.687 Malloc1 00:15:22.687 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:22.945 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:23.203 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:23.769 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.769 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:23.769 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:24.028 Malloc2 00:15:24.028 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:24.287 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:24.545 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:24.803 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:24.803 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 202041 00:15:24.803 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 202041 ']' 00:15:24.803 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 202041 00:15:24.803 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:24.803 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.803 09:36:13 
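The per-device setup loop traced above (target/nvmf_vfio_user.sh@68-74: make the socket directory, create a malloc bdev, create the subsystem, attach the namespace, add the VFIOUSER listener) can be sketched as a dry-run loop. The `echo` stand-in for `scripts/rpc.py` and the temp directory replacing `/var/run/vfio-user/domain` are assumptions so the sketch runs without a live SPDK target; the RPC names and arguments are the ones in the trace.

```shell
#!/usr/bin/env bash
rpc="echo rpc.py"     # stand-in for scripts/rpc.py: just print each call
base=$(mktemp -d)     # stands in for /var/run/vfio-user/domain
NUM_DEVICES=2

for i in $(seq 1 "$NUM_DEVICES"); do
	traddr="$base/vfio-user$i/$i"
	mkdir -p "$traddr"                       # socket dir for the controller
	$rpc bdev_malloc_create 64 512 -b "Malloc$i"
	$rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
	$rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
	$rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
		-t VFIOUSER -a "$traddr" -s 0
done
```

Replacing `rpc="echo rpc.py"` with the real `scripts/rpc.py` path would issue the same sequence the test performs against the interrupt-mode target started with `-M -I`.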
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 202041 00:15:24.803 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.804 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.804 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 202041' 00:15:24.804 killing process with pid 202041 00:15:24.804 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 202041 00:15:24.804 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 202041 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:25.062 00:15:25.062 real 0m53.691s 00:15:25.062 user 3m27.119s 00:15:25.062 sys 0m3.888s 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:25.062 ************************************ 00:15:25.062 END TEST nvmf_vfio_user 00:15:25.062 ************************************ 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.062 ************************************ 00:15:25.062 START TEST nvmf_vfio_user_nvme_compliance 00:15:25.062 ************************************ 00:15:25.062 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:25.062 * Looking for test storage... 00:15:25.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:25.062 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:25.062 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:15:25.062 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.322 09:36:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.322 09:36:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.322 --rc genhtml_branch_coverage=1 00:15:25.322 --rc genhtml_function_coverage=1 00:15:25.322 --rc genhtml_legend=1 00:15:25.322 --rc geninfo_all_blocks=1 00:15:25.322 --rc geninfo_unexecuted_blocks=1 00:15:25.322 00:15:25.322 ' 00:15:25.322 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.323 --rc genhtml_branch_coverage=1 00:15:25.323 --rc genhtml_function_coverage=1 00:15:25.323 --rc genhtml_legend=1 00:15:25.323 --rc geninfo_all_blocks=1 00:15:25.323 --rc geninfo_unexecuted_blocks=1 00:15:25.323 00:15:25.323 ' 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:25.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.323 --rc genhtml_branch_coverage=1 00:15:25.323 --rc genhtml_function_coverage=1 00:15:25.323 --rc 
genhtml_legend=1 00:15:25.323 --rc geninfo_all_blocks=1 00:15:25.323 --rc geninfo_unexecuted_blocks=1 00:15:25.323 00:15:25.323 ' 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:25.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.323 --rc genhtml_branch_coverage=1 00:15:25.323 --rc genhtml_function_coverage=1 00:15:25.323 --rc genhtml_legend=1 00:15:25.323 --rc geninfo_all_blocks=1 00:15:25.323 --rc geninfo_unexecuted_blocks=1 00:15:25.323 00:15:25.323 ' 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.323 09:36:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:25.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:25.323 09:36:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=202627 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 202627' 00:15:25.323 Process pid: 202627 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 202627 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 202627 ']' 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.323 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:25.323 [2024-10-07 09:36:14.199812] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:15:25.323 [2024-10-07 09:36:14.199893] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.323 [2024-10-07 09:36:14.255232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.584 [2024-10-07 09:36:14.360721] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.584 [2024-10-07 09:36:14.360792] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.584 [2024-10-07 09:36:14.360815] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.584 [2024-10-07 09:36:14.360825] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.584 [2024-10-07 09:36:14.360834] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:25.584 [2024-10-07 09:36:14.361547] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.584 [2024-10-07 09:36:14.361675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.584 [2024-10-07 09:36:14.361671] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.584 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.584 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:25.584 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:26.521 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.521 09:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.780 malloc0 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:26.781 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:26.781 00:15:26.781 00:15:26.781 CUnit - A unit testing framework for C - Version 2.1-3 00:15:26.781 http://cunit.sourceforge.net/ 00:15:26.781 00:15:26.781 00:15:26.781 Suite: nvme_compliance 00:15:26.781 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-07 09:36:15.702305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.781 [2024-10-07 09:36:15.703759] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:26.781 [2024-10-07 09:36:15.703784] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:26.781 [2024-10-07 09:36:15.703796] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:26.781 [2024-10-07 09:36:15.705322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.781 passed 00:15:27.039 Test: admin_identify_ctrlr_verify_fused ...[2024-10-07 09:36:15.793913] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.039 [2024-10-07 09:36:15.796934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.039 passed 00:15:27.039 Test: admin_identify_ns ...[2024-10-07 09:36:15.883219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.039 [2024-10-07 09:36:15.943686] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:27.039 [2024-10-07 09:36:15.951683] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:27.039 [2024-10-07 09:36:15.972806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:27.039 passed 00:15:27.299 Test: admin_get_features_mandatory_features ...[2024-10-07 09:36:16.056425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.299 [2024-10-07 09:36:16.059443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.299 passed 00:15:27.299 Test: admin_get_features_optional_features ...[2024-10-07 09:36:16.145043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.299 [2024-10-07 09:36:16.148066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.299 passed 00:15:27.299 Test: admin_set_features_number_of_queues ...[2024-10-07 09:36:16.233239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.559 [2024-10-07 09:36:16.341795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.559 passed 00:15:27.559 Test: admin_get_log_page_mandatory_logs ...[2024-10-07 09:36:16.422329] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.559 [2024-10-07 09:36:16.425352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.559 passed 00:15:27.559 Test: admin_get_log_page_with_lpo ...[2024-10-07 09:36:16.512760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.817 [2024-10-07 09:36:16.581686] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:27.817 [2024-10-07 09:36:16.594763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.817 passed 00:15:27.817 Test: fabric_property_get ...[2024-10-07 09:36:16.682046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.817 [2024-10-07 09:36:16.683319] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:27.817 [2024-10-07 09:36:16.685067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.817 passed 00:15:27.817 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-07 09:36:16.767569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.817 [2024-10-07 09:36:16.768897] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:27.817 [2024-10-07 09:36:16.770590] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.817 passed 00:15:28.075 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-07 09:36:16.857289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.075 [2024-10-07 09:36:16.941678] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:28.075 [2024-10-07 09:36:16.957674] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:28.075 [2024-10-07 09:36:16.962790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.075 passed 00:15:28.075 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-07 09:36:17.046625] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.075 [2024-10-07 09:36:17.047955] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:28.075 [2024-10-07 09:36:17.049661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.335 passed 00:15:28.335 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-07 09:36:17.137241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.335 [2024-10-07 09:36:17.213678] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:28.335 [2024-10-07 
09:36:17.237682] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:28.335 [2024-10-07 09:36:17.242793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.335 passed
00:15:28.335 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-07 09:36:17.325293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:28.335 [2024-10-07 09:36:17.326608] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:15:28.335 [2024-10-07 09:36:17.326664] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:15:28.335 [2024-10-07 09:36:17.328315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.595 passed
00:15:28.595 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-07 09:36:17.411436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:28.595 [2024-10-07 09:36:17.502675] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:15:28.595 [2024-10-07 09:36:17.509696] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:15:28.595 [2024-10-07 09:36:17.518693] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:15:28.595 [2024-10-07 09:36:17.526690] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:15:28.595 [2024-10-07 09:36:17.555802] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.595 passed
00:15:28.855 Test: admin_create_io_sq_verify_pc ...[2024-10-07 09:36:17.638324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:28.855 [2024-10-07 09:36:17.654701] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:15:28.855 [2024-10-07 09:36:17.672663] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.855 passed
00:15:28.855 Test: admin_create_io_qp_max_qps ...[2024-10-07 09:36:17.757282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:30.236 [2024-10-07 09:36:18.862685] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:15:30.494 [2024-10-07 09:36:19.242968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:30.494 passed
00:15:30.494 Test: admin_create_io_sq_shared_cq ...[2024-10-07 09:36:19.326168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:30.494 [2024-10-07 09:36:19.457682] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:30.752 [2024-10-07 09:36:19.494787] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:30.752 passed
00:15:30.752
00:15:30.752 Run Summary: Type Total Ran Passed Failed Inactive
00:15:30.752 suites 1 1 n/a 0 0
00:15:30.752 tests 18 18 18 0 0
00:15:30.752 asserts 360 360 360 0 n/a
00:15:30.752
00:15:30.752 Elapsed time = 1.575 seconds
00:15:30.752 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 202627
00:15:30.752 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 202627 ']'
00:15:30.753 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 202627
00:15:30.753 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname
00:15:30.753 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:30.753 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 202627
00:15:30.753 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:30.753 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:30.753 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 202627'
00:15:30.753 killing process with pid 202627 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 202627
00:15:30.753 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 202627
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:15:31.012
00:15:31.012 real 0m5.852s
00:15:31.012 user 0m16.277s
00:15:31.012 sys 0m0.564s
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:31.012 ************************************
00:15:31.012 END TEST nvmf_vfio_user_nvme_compliance
00:15:31.012 ************************************
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:31.012 ************************************
00:15:31.012 START TEST nvmf_vfio_user_fuzz
00:15:31.012 ************************************
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:31.012 * Looking for test storage...
00:15:31.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version
00:15:31.012 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:15:31.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:31.271 --rc genhtml_branch_coverage=1
00:15:31.271 --rc genhtml_function_coverage=1
00:15:31.271 --rc genhtml_legend=1
00:15:31.271 --rc geninfo_all_blocks=1
00:15:31.271 --rc geninfo_unexecuted_blocks=1
00:15:31.271
00:15:31.271 '
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:15:31.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:31.271 --rc genhtml_branch_coverage=1
00:15:31.271 --rc genhtml_function_coverage=1
00:15:31.271 --rc genhtml_legend=1
00:15:31.271 --rc geninfo_all_blocks=1
00:15:31.271 --rc geninfo_unexecuted_blocks=1
00:15:31.271
00:15:31.271 '
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:15:31.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:31.271 --rc genhtml_branch_coverage=1
00:15:31.271 --rc genhtml_function_coverage=1
00:15:31.271 --rc genhtml_legend=1
00:15:31.271 --rc geninfo_all_blocks=1
00:15:31.271 --rc geninfo_unexecuted_blocks=1
00:15:31.271
00:15:31.271 '
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:15:31.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:31.271 --rc genhtml_branch_coverage=1
00:15:31.271 --rc genhtml_function_coverage=1
00:15:31.271 --rc genhtml_legend=1
00:15:31.271 --rc geninfo_all_blocks=1
00:15:31.271 --rc geninfo_unexecuted_blocks=1
00:15:31.271
00:15:31.271 '
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:31.271 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:15:31.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=203334
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 203334'
00:15:31.272 Process pid: 203334
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 203334
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 203334 ']'
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:31.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:31.272 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:31.530 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:31.530 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0
00:15:31.530 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.471 malloc0
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:15:32.471 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:16:04.545 Fuzzing completed. Shutting down the fuzz application
00:16:04.546
00:16:04.546 Dumping successful admin opcodes:
00:16:04.546 8, 9, 10, 24,
00:16:04.546 Dumping successful io opcodes:
00:16:04.546 0,
00:16:04.546 NS: 0x200003a1ef00 I/O qp, Total commands completed: 642113, total successful commands: 2492, random_seed: 2272032896
00:16:04.546 NS: 0x200003a1ef00 admin qp, Total commands completed: 110262, total successful commands: 906, random_seed: 1898723904
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 203334
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 203334 ']'
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 203334
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 203334
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 203334'
00:16:04.546 killing process with pid 203334 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 203334
00:16:04.546 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 203334
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:16:04.546
00:16:04.546 real 0m32.412s
00:16:04.546 user 0m30.360s
00:16:04.546 sys 0m28.881s
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:16:04.546 ************************************
00:16:04.546 END TEST nvmf_vfio_user_fuzz
00:16:04.546 ************************************
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:04.546 ************************************
00:16:04.546 START TEST nvmf_auth_target
00:16:04.546 ************************************
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:16:04.546 * Looking for test storage...
00:16:04.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-:
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-:
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<'
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:16:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:04.546 --rc genhtml_branch_coverage=1
00:16:04.546 --rc genhtml_function_coverage=1
00:16:04.546 --rc genhtml_legend=1
00:16:04.546 --rc geninfo_all_blocks=1
00:16:04.546 --rc geninfo_unexecuted_blocks=1
00:16:04.546
00:16:04.546 '
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:16:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:04.546 --rc genhtml_branch_coverage=1
00:16:04.546 --rc genhtml_function_coverage=1
00:16:04.546 --rc genhtml_legend=1
00:16:04.546 --rc geninfo_all_blocks=1
00:16:04.546 --rc geninfo_unexecuted_blocks=1
00:16:04.546
00:16:04.546 '
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:16:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:04.546 --rc genhtml_branch_coverage=1
00:16:04.546 --rc genhtml_function_coverage=1
00:16:04.546 --rc genhtml_legend=1
00:16:04.546 --rc geninfo_all_blocks=1
00:16:04.546 --rc geninfo_unexecuted_blocks=1
00:16:04.546
00:16:04.546 '
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:16:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:04.546 --rc genhtml_branch_coverage=1
00:16:04.546 --rc genhtml_function_coverage=1
00:16:04.546 --rc genhtml_legend=1
00:16:04.546 --rc geninfo_all_blocks=1
00:16:04.546 --rc geninfo_unexecuted_blocks=1
00:16:04.546
00:16:04.546 '
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:04.546 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:04.547 09:36:52 
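The `line 33: [: : integer expression expected` message in the trace is a real (benign) shell bug: `'[' '' -eq 1 ']'` applies a numeric comparison to an empty string. A small reproduction and a defensive variant that defaults the value before the test (the variable name here is hypothetical; the trace only shows the expanded empty string):

```shell
# Reproduce the error seen in the trace: -eq on an empty value fails.
var=""
if [ "$var" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi
# Defensive form: default empty/unset to 0 before the numeric test.
if [ "${var:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

The first test errors out (suppressed here) and its branch is skipped; the second evaluates cleanly and prints "flag not set".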
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:04.547 09:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:04.547 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:05.925 09:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:05.925 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:05.926 09:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:16:05.926 Found 0000:09:00.0 (0x8086 - 0x1592) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:16:05.926 Found 0000:09:00.1 (0x8086 - 0x1592) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:16:05.926 
09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:05.926 Found net devices under 0000:09:00.0: cvl_0_0 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:05.926 
09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:05.926 Found net devices under 0000:09:00.1: cvl_0_1 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:05.926 09:36:54 
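The device-discovery phase above resolves each PCI function to its net interface by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory with the `##*/` expansion. The same two-step idiom, sketched against a scratch directory so it is self-contained (the PCI address and interface name mirror the trace but are illustrative here):

```shell
# Mimic: pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) followed by
# pci_net_devs=("${pci_net_devs[@]##*/}") using a temp dir instead of sysfs.
tmp=$(mktemp -d)
pci="0000:09:00.0"                       # illustrative PCI address
mkdir -p "$tmp/$pci/net/cvl_0_0"         # stand-in for the interface dir
pci_net_devs=("$tmp/$pci/net/"*)         # glob: full paths
pci_net_devs=("${pci_net_devs[@]##*/}")  # strip dirs, keep interface names
echo "${pci_net_devs[0]}"                # -> cvl_0_0
rm -rf "$tmp"
```

The `##*/` expansion removes the longest prefix ending in `/` from every array element, which is why the trace goes from sysfs paths to bare names like `cvl_0_0` in one step.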
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:05.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:16:05.926 00:16:05.926 --- 10.0.0.2 ping statistics --- 00:16:05.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.926 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:05.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:05.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:16:05.926 00:16:05.926 --- 10.0.0.1 ping statistics --- 00:16:05.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.926 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:05.926 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
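The netns wiring performed by `nvmf_tcp_init` above can be summarized as: move the target-side interface into a fresh namespace with 10.0.0.2, leave the initiator side in the root namespace with 10.0.0.1, open the NVMe/TCP port, and ping both ways. A dry-run sketch of that sequence (echoed rather than executed, since the real commands need root and the actual CI NICs; swap the `run` body to execute them):

```shell
# Dry-run of the namespace topology from the trace. Interface and
# namespace names are taken from the log; addresses are the test defaults.
ns=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # swap for: run() { "$@"; } to actually apply
run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```

Note that `loopback` must be brought up inside the namespace explicitly; a fresh netns starts with `lo` down.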
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=208530 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 208530 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 208530 ']' 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.927 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=208662 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@752 -- # digest=null 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=682fa872f6c003098da83c5bf37de0a1660fa2995a0989f4 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.APp 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 682fa872f6c003098da83c5bf37de0a1660fa2995a0989f4 0 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 682fa872f6c003098da83c5bf37de0a1660fa2995a0989f4 0 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=682fa872f6c003098da83c5bf37de0a1660fa2995a0989f4 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.APp 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.APp 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.APp 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
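The `gen_dhchap_key` trace above reads `len/2` bytes from `/dev/urandom` with `xxd` (so `len` hex characters) and then hands the hex string to `format_dhchap_key`, which wraps it into a `DHHC-1:...` secret via an inline Python snippet. A standalone sketch of that formatting step, assuming the TP 8006 secret representation of base64(secret || CRC-32 of secret, little-endian) with a two-hex-digit digest id; the key is fixed here (taken from the trace) so the output is reproducible:

```shell
# Assumed DHHC-1 formatting: "DHHC-1:<digest id>:base64(secret + crc32):".
# The 48-char hex string itself is treated as the 48-byte ASCII secret.
key=682fa872f6c003098da83c5bf37de0a1660fa2995a0989f4
digest=0   # 0 = null (no hash), per the digests map in the trace
secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # ASCII secret bytes
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32, little-endian
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
)
echo "$secret"
```

For a 48-byte secret the payload is 52 bytes, so the base64 field is 72 characters and the whole secret is 83, bracketed by the `DHHC-1:00:` prefix and a trailing colon.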
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9fe200041f841b6ace757cc0a55ab972cdff2c1dc3a2eb77c37b24548911a2c2 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.AOe 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9fe200041f841b6ace757cc0a55ab972cdff2c1dc3a2eb77c37b24548911a2c2 3 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9fe200041f841b6ace757cc0a55ab972cdff2c1dc3a2eb77c37b24548911a2c2 3 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9fe200041f841b6ace757cc0a55ab972cdff2c1dc3a2eb77c37b24548911a2c2 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:16:06.186 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.AOe 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.AOe 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.AOe 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f1a5bdb31712e667c9a2a52fded4ee5c 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.GyJ 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f1a5bdb31712e667c9a2a52fded4ee5c 1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
f1a5bdb31712e667c9a2a52fded4ee5c 1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f1a5bdb31712e667c9a2a52fded4ee5c 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.GyJ 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.GyJ 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.GyJ 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=affdfcdc89179a9100f0c7757fee6befb251af3bdb27d09b 00:16:06.446 09:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.VUe 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key affdfcdc89179a9100f0c7757fee6befb251af3bdb27d09b 2 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 affdfcdc89179a9100f0c7757fee6befb251af3bdb27d09b 2 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=affdfcdc89179a9100f0c7757fee6befb251af3bdb27d09b 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.VUe 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.VUe 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.VUe 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=881c71aed66768354be97349661d65a6fd32e612cbc7de62 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.6Pr 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 881c71aed66768354be97349661d65a6fd32e612cbc7de62 2 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 881c71aed66768354be97349661d65a6fd32e612cbc7de62 2 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=881c71aed66768354be97349661d65a6fd32e612cbc7de62 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.6Pr 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.6Pr 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.6Pr 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=60fcf3056dfcc30b4244ce0b4947b6fc 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.LzD 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 60fcf3056dfcc30b4244ce0b4947b6fc 1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 60fcf3056dfcc30b4244ce0b4947b6fc 1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=60fcf3056dfcc30b4244ce0b4947b6fc 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.LzD 00:16:06.446 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.LzD 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.LzD 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=53dab97388a7bacbdf50fc7e5b19960a31f776a28c2ed2b36be62a54ba655be1 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.QoR 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 53dab97388a7bacbdf50fc7e5b19960a31f776a28c2ed2b36be62a54ba655be1 3 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 53dab97388a7bacbdf50fc7e5b19960a31f776a28c2ed2b36be62a54ba655be1 3 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=53dab97388a7bacbdf50fc7e5b19960a31f776a28c2ed2b36be62a54ba655be1 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:06.447 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.QoR 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.QoR 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.QoR 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 208530 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 208530 ']' 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
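The `gen_dhchap_key`/`format_dhchap_key` steps traced above boil down to: read N random bytes, hex-encode them, and wrap the ASCII hex string in the NVMe DH-HMAC-CHAP secret representation, `DHHC-1:<digest>:<base64 payload>:`. A minimal Python sketch follows; the helper names are ours, not SPDK's, and it assumes the base64 payload is the ASCII hex key followed by its CRC-32 appended little-endian (consistent with the `DHHC-1:...` secrets that appear later in this trace):

```python
import base64
import os
import struct
import zlib

# Digest identifiers as used in the trace above: null=0, sha256=1, sha384=2, sha512=3.

def format_dhchap_secret(hex_key: str, digest: int) -> str:
    """Wrap an ASCII hex key in the DHHC-1 secret representation.

    Assumption: the base64 payload is the ASCII hex string followed by its
    CRC-32 packed little-endian, mirroring what format_dhchap_key emits.
    """
    payload = hex_key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(payload))
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(payload + crc).decode("ascii"))

def gen_dhchap_secret(digest: int, key_len: int) -> str:
    """Mirror gen_dhchap_key: key_len hex characters come from key_len/2 random bytes."""
    hex_key = os.urandom(key_len // 2).hex()
    return format_dhchap_secret(hex_key, digest)

# Illustrative reuse of the sha256 key value from the trace:
print(format_dhchap_secret("f1a5bdb31712e667c9a2a52fded4ee5c", 1))
```

The `chmod 0600` on each `/tmp/spdk.key-*` file in the trace matters: SPDK refuses world-readable key files, so any sketch that writes these secrets to disk should restrict permissions the same way.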
00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.705 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 208662 /var/tmp/host.sock 00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 208662 ']' 00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:06.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
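The `waitforlisten` calls above block until the target (`/var/tmp/spdk.sock`) and host (`/var/tmp/host.sock`) applications accept RPC connections on their UNIX-domain sockets. The retry loop can be sketched as follows; the function name, retry count, and delay are illustrative, not SPDK's actual implementation:

```python
import socket
import time

def wait_for_unix_listen(path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until a UNIX-domain stream socket at `path` accepts connections.

    Returns True as soon as a connect() succeeds, False after max_retries
    failed attempts (socket missing or not yet listening).
    """
    for _ in range(max_retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return True
        except OSError:
            time.sleep(delay)
    return False
```

In the trace, the same pattern is applied twice with different RPC addresses, which is why each `rpc.py` invocation that follows carries an explicit `-s /var/tmp/host.sock` when it targets the host application rather than the default target socket.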
00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.963 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.221 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.221 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:07.221 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.APp 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.APp 00:16:07.222 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.APp 00:16:07.480 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.AOe ]] 00:16:07.480 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AOe 00:16:07.480 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.480 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.480 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.480 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AOe 00:16:07.480 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AOe 00:16:07.739 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:07.739 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GyJ 00:16:07.739 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.739 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.739 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.739 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.GyJ 00:16:07.739 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.GyJ 00:16:07.997 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.VUe ]] 00:16:07.997 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VUe 00:16:07.997 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.997 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.997 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.997 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VUe 00:16:07.997 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VUe 00:16:08.255 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:08.255 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6Pr 00:16:08.255 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.255 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.255 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.255 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.6Pr 00:16:08.255 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.6Pr 00:16:08.514 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.LzD ]] 00:16:08.514 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LzD 00:16:08.514 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.514 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.514 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.514 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LzD 00:16:08.514 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LzD 00:16:08.772 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:08.772 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QoR 00:16:08.772 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.772 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.772 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.772 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.QoR 00:16:08.772 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.QoR 00:16:09.030 09:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:09.030 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:09.030 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.030 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.030 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.030 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.289 09:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.289 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.861 00:16:09.861 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.861 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.861 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.119 { 00:16:10.119 "cntlid": 1, 00:16:10.119 "qid": 0, 00:16:10.119 "state": "enabled", 00:16:10.119 "thread": "nvmf_tgt_poll_group_000", 00:16:10.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:10.119 "listen_address": { 00:16:10.119 "trtype": "TCP", 00:16:10.119 "adrfam": "IPv4", 00:16:10.119 "traddr": "10.0.0.2", 00:16:10.119 "trsvcid": "4420" 00:16:10.119 }, 00:16:10.119 "peer_address": { 00:16:10.119 "trtype": "TCP", 00:16:10.119 "adrfam": "IPv4", 00:16:10.119 "traddr": "10.0.0.1", 00:16:10.119 "trsvcid": "39352" 00:16:10.119 }, 00:16:10.119 "auth": { 00:16:10.119 "state": "completed", 00:16:10.119 "digest": "sha256", 00:16:10.119 "dhgroup": "null" 00:16:10.119 } 00:16:10.119 } 00:16:10.119 ]' 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.119 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.119 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.119 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.119 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.376 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:10.376 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:15.640 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.640 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:15.640 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.640 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.640 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.640 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.640 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:15.640 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.640 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.640 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.898 { 00:16:15.898 "cntlid": 3, 00:16:15.898 "qid": 0, 00:16:15.898 "state": "enabled", 00:16:15.898 "thread": "nvmf_tgt_poll_group_000", 00:16:15.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:15.898 "listen_address": { 00:16:15.898 "trtype": "TCP", 00:16:15.898 "adrfam": "IPv4", 00:16:15.898 
"traddr": "10.0.0.2", 00:16:15.898 "trsvcid": "4420" 00:16:15.898 }, 00:16:15.898 "peer_address": { 00:16:15.898 "trtype": "TCP", 00:16:15.898 "adrfam": "IPv4", 00:16:15.898 "traddr": "10.0.0.1", 00:16:15.898 "trsvcid": "39378" 00:16:15.898 }, 00:16:15.898 "auth": { 00:16:15.898 "state": "completed", 00:16:15.898 "digest": "sha256", 00:16:15.898 "dhgroup": "null" 00:16:15.898 } 00:16:15.898 } 00:16:15.898 ]' 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.898 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.156 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:16.156 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 
--hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:17.094 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.094 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:17.094 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.094 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.094 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.094 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.094 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.094 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.352 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.920 00:16:17.920 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.920 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.920 
09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.178 { 00:16:18.178 "cntlid": 5, 00:16:18.178 "qid": 0, 00:16:18.178 "state": "enabled", 00:16:18.178 "thread": "nvmf_tgt_poll_group_000", 00:16:18.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:18.178 "listen_address": { 00:16:18.178 "trtype": "TCP", 00:16:18.178 "adrfam": "IPv4", 00:16:18.178 "traddr": "10.0.0.2", 00:16:18.178 "trsvcid": "4420" 00:16:18.178 }, 00:16:18.178 "peer_address": { 00:16:18.178 "trtype": "TCP", 00:16:18.178 "adrfam": "IPv4", 00:16:18.178 "traddr": "10.0.0.1", 00:16:18.178 "trsvcid": "51200" 00:16:18.178 }, 00:16:18.178 "auth": { 00:16:18.178 "state": "completed", 00:16:18.178 "digest": "sha256", 00:16:18.178 "dhgroup": "null" 00:16:18.178 } 00:16:18.178 } 00:16:18.178 ]' 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.178 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:18.178 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.178 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.178 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.178 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.178 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.437 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:18.437 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:19.376 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.376 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:19.376 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.376 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.376 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.376 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.376 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.376 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.634 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.891 00:16:19.891 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.891 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.891 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.149 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.149 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.149 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.149 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.149 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.149 
09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.149 { 00:16:20.149 "cntlid": 7, 00:16:20.149 "qid": 0, 00:16:20.149 "state": "enabled", 00:16:20.149 "thread": "nvmf_tgt_poll_group_000", 00:16:20.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:20.149 "listen_address": { 00:16:20.149 "trtype": "TCP", 00:16:20.149 "adrfam": "IPv4", 00:16:20.149 "traddr": "10.0.0.2", 00:16:20.149 "trsvcid": "4420" 00:16:20.149 }, 00:16:20.149 "peer_address": { 00:16:20.149 "trtype": "TCP", 00:16:20.149 "adrfam": "IPv4", 00:16:20.149 "traddr": "10.0.0.1", 00:16:20.149 "trsvcid": "51216" 00:16:20.149 }, 00:16:20.149 "auth": { 00:16:20.149 "state": "completed", 00:16:20.149 "digest": "sha256", 00:16:20.149 "dhgroup": "null" 00:16:20.149 } 00:16:20.149 } 00:16:20.149 ]' 00:16:20.149 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.408 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.408 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.408 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.408 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.408 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.408 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.408 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.666 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:20.666 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.608 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.867 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.124 00:16:22.124 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.124 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.124 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.383 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.383 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.383 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.383 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.383 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.383 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.383 { 00:16:22.383 "cntlid": 9, 00:16:22.383 "qid": 0, 00:16:22.383 "state": "enabled", 00:16:22.383 "thread": "nvmf_tgt_poll_group_000", 00:16:22.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:22.383 "listen_address": { 00:16:22.384 "trtype": "TCP", 00:16:22.384 "adrfam": "IPv4", 00:16:22.384 "traddr": "10.0.0.2", 00:16:22.384 "trsvcid": "4420" 00:16:22.384 }, 00:16:22.384 "peer_address": { 00:16:22.384 "trtype": "TCP", 00:16:22.384 "adrfam": "IPv4", 00:16:22.384 "traddr": "10.0.0.1", 00:16:22.384 "trsvcid": "51234" 00:16:22.384 
}, 00:16:22.384 "auth": { 00:16:22.384 "state": "completed", 00:16:22.384 "digest": "sha256", 00:16:22.384 "dhgroup": "ffdhe2048" 00:16:22.384 } 00:16:22.384 } 00:16:22.384 ]' 00:16:22.384 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.642 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.642 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.642 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.642 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.642 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.642 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.642 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.900 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:22.900 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret 
DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:23.838 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.838 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:23.838 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.838 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.838 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.838 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.838 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.838 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.096 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.353 00:16:24.354 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.354 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.354 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.612 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.612 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.612 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.612 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.612 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.612 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.612 { 00:16:24.612 "cntlid": 11, 00:16:24.612 "qid": 0, 00:16:24.612 "state": "enabled", 00:16:24.612 "thread": "nvmf_tgt_poll_group_000", 00:16:24.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:24.612 "listen_address": { 00:16:24.612 "trtype": "TCP", 00:16:24.612 "adrfam": "IPv4", 00:16:24.612 "traddr": "10.0.0.2", 00:16:24.612 "trsvcid": "4420" 00:16:24.612 }, 00:16:24.612 "peer_address": { 00:16:24.612 "trtype": "TCP", 00:16:24.612 "adrfam": "IPv4", 00:16:24.612 "traddr": "10.0.0.1", 00:16:24.612 "trsvcid": "51258" 00:16:24.612 }, 00:16:24.612 "auth": { 00:16:24.612 "state": "completed", 00:16:24.612 "digest": "sha256", 00:16:24.612 "dhgroup": "ffdhe2048" 00:16:24.612 } 00:16:24.612 } 00:16:24.612 ]' 00:16:24.612 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.870 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.870 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.871 09:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.871 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.871 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.871 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.871 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.129 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:25.129 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:26.068 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.068 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:26.068 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:26.068 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.068 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.068 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.068 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.068 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.327 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.585 00:16:26.585 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.585 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.585 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.844 09:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.844 { 00:16:26.844 "cntlid": 13, 00:16:26.844 "qid": 0, 00:16:26.844 "state": "enabled", 00:16:26.844 "thread": "nvmf_tgt_poll_group_000", 00:16:26.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:26.844 "listen_address": { 00:16:26.844 "trtype": "TCP", 00:16:26.844 "adrfam": "IPv4", 00:16:26.844 "traddr": "10.0.0.2", 00:16:26.844 "trsvcid": "4420" 00:16:26.844 }, 00:16:26.844 "peer_address": { 00:16:26.844 "trtype": "TCP", 00:16:26.844 "adrfam": "IPv4", 00:16:26.844 "traddr": "10.0.0.1", 00:16:26.844 "trsvcid": "56956" 00:16:26.844 }, 00:16:26.844 "auth": { 00:16:26.844 "state": "completed", 00:16:26.844 "digest": "sha256", 00:16:26.844 "dhgroup": "ffdhe2048" 00:16:26.844 } 00:16:26.844 } 00:16:26.844 ]' 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.844 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.102 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.102 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.102 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.360 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:27.360 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:28.295 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.295 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:28.295 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.295 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.295 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.295 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.295 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.295 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.553 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.812 00:16:28.812 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.812 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.812 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.070 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.070 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.070 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.070 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.070 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.070 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.070 { 00:16:29.070 "cntlid": 15, 00:16:29.070 "qid": 0, 00:16:29.070 "state": "enabled", 00:16:29.070 "thread": "nvmf_tgt_poll_group_000", 00:16:29.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:29.070 "listen_address": { 00:16:29.070 "trtype": "TCP", 00:16:29.070 "adrfam": "IPv4", 00:16:29.070 "traddr": "10.0.0.2", 00:16:29.070 "trsvcid": "4420" 00:16:29.070 }, 00:16:29.070 "peer_address": { 00:16:29.070 "trtype": "TCP", 00:16:29.070 "adrfam": "IPv4", 00:16:29.070 "traddr": "10.0.0.1", 
00:16:29.070 "trsvcid": "56974" 00:16:29.070 }, 00:16:29.070 "auth": { 00:16:29.070 "state": "completed", 00:16:29.070 "digest": "sha256", 00:16:29.070 "dhgroup": "ffdhe2048" 00:16:29.070 } 00:16:29.070 } 00:16:29.070 ]' 00:16:29.070 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.070 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.070 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.070 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.070 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.328 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.328 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.328 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.587 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:29.587 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:30.523 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.523 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:30.523 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.523 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.523 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.524 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.524 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.524 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.524 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:30.781 09:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.781 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.039 00:16:31.039 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.039 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.039 09:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.297 { 00:16:31.297 "cntlid": 17, 00:16:31.297 "qid": 0, 00:16:31.297 "state": "enabled", 00:16:31.297 "thread": "nvmf_tgt_poll_group_000", 00:16:31.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:31.297 "listen_address": { 00:16:31.297 "trtype": "TCP", 00:16:31.297 "adrfam": "IPv4", 00:16:31.297 "traddr": "10.0.0.2", 00:16:31.297 "trsvcid": "4420" 00:16:31.297 }, 00:16:31.297 "peer_address": { 00:16:31.297 "trtype": "TCP", 00:16:31.297 "adrfam": "IPv4", 00:16:31.297 "traddr": "10.0.0.1", 00:16:31.297 "trsvcid": "57002" 00:16:31.297 }, 00:16:31.297 "auth": { 00:16:31.297 "state": "completed", 00:16:31.297 "digest": "sha256", 00:16:31.297 "dhgroup": "ffdhe3072" 00:16:31.297 } 00:16:31.297 } 00:16:31.297 ]' 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.297 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.865 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:31.865 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:32.800 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.800 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:32.800 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.800 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.801 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.479 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.479 
09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.479 { 00:16:33.479 "cntlid": 19, 00:16:33.479 "qid": 0, 00:16:33.479 "state": "enabled", 00:16:33.479 "thread": "nvmf_tgt_poll_group_000", 00:16:33.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:33.479 "listen_address": { 00:16:33.479 "trtype": "TCP", 00:16:33.479 "adrfam": "IPv4", 00:16:33.479 "traddr": "10.0.0.2", 00:16:33.479 "trsvcid": "4420" 00:16:33.479 }, 00:16:33.479 "peer_address": { 00:16:33.479 "trtype": "TCP", 00:16:33.479 "adrfam": "IPv4", 00:16:33.479 "traddr": "10.0.0.1", 00:16:33.479 "trsvcid": "57026" 00:16:33.479 }, 00:16:33.479 "auth": { 00:16:33.479 "state": "completed", 00:16:33.479 "digest": "sha256", 00:16:33.479 "dhgroup": "ffdhe3072" 00:16:33.479 } 00:16:33.479 } 00:16:33.479 ]' 00:16:33.479 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.764 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.764 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.764 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.764 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.764 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.764 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.764 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.054 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:34.054 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:35.032 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.032 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:35.032 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.032 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.032 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.032 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.032 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.032 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.032 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:35.032 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.032 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.032 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.032 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.032 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.033 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.033 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.033 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.033 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.033 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.033 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.033 09:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.661 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.661 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.661 { 00:16:35.661 "cntlid": 21, 00:16:35.661 "qid": 0, 00:16:35.661 "state": "enabled", 00:16:35.661 "thread": "nvmf_tgt_poll_group_000", 00:16:35.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:35.662 "listen_address": { 00:16:35.662 "trtype": "TCP", 00:16:35.662 "adrfam": "IPv4", 00:16:35.662 "traddr": "10.0.0.2", 00:16:35.662 "trsvcid": "4420" 00:16:35.662 }, 00:16:35.662 "peer_address": { 
00:16:35.662 "trtype": "TCP", 00:16:35.662 "adrfam": "IPv4", 00:16:35.662 "traddr": "10.0.0.1", 00:16:35.662 "trsvcid": "57060" 00:16:35.662 }, 00:16:35.662 "auth": { 00:16:35.662 "state": "completed", 00:16:35.662 "digest": "sha256", 00:16:35.662 "dhgroup": "ffdhe3072" 00:16:35.662 } 00:16:35.662 } 00:16:35.662 ]' 00:16:35.662 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.949 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.949 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.949 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.949 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.949 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.949 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.950 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.235 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:36.235 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret 
DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:37.246 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.246 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:37.246 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.246 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.246 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.246 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.246 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.246 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.547 09:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.547 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.548 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.548 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.877 00:16:37.877 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.877 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.877 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.188 { 00:16:38.188 "cntlid": 23, 00:16:38.188 "qid": 0, 00:16:38.188 "state": "enabled", 00:16:38.188 "thread": "nvmf_tgt_poll_group_000", 00:16:38.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:38.188 "listen_address": { 00:16:38.188 "trtype": "TCP", 00:16:38.188 "adrfam": "IPv4", 00:16:38.188 "traddr": "10.0.0.2", 00:16:38.188 "trsvcid": "4420" 00:16:38.188 }, 00:16:38.188 "peer_address": { 00:16:38.188 "trtype": "TCP", 00:16:38.188 "adrfam": "IPv4", 00:16:38.188 "traddr": "10.0.0.1", 00:16:38.188 "trsvcid": "36178" 00:16:38.188 }, 00:16:38.188 "auth": { 00:16:38.188 "state": "completed", 00:16:38.188 "digest": "sha256", 00:16:38.188 "dhgroup": "ffdhe3072" 00:16:38.188 } 00:16:38.188 } 00:16:38.188 ]' 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.188 09:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.188 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.491 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:38.491 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.205 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.488 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:39.488 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.489 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.158 00:16:40.158 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.158 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.158 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.158 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.158 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.158 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.158 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.456 09:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.456 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.456 { 00:16:40.456 "cntlid": 25, 00:16:40.456 "qid": 0, 00:16:40.456 "state": "enabled", 00:16:40.456 "thread": "nvmf_tgt_poll_group_000", 00:16:40.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:40.456 "listen_address": { 00:16:40.456 "trtype": "TCP", 00:16:40.456 "adrfam": "IPv4", 00:16:40.456 "traddr": "10.0.0.2", 00:16:40.456 "trsvcid": "4420" 00:16:40.456 }, 00:16:40.456 "peer_address": { 00:16:40.456 "trtype": "TCP", 00:16:40.456 "adrfam": "IPv4", 00:16:40.456 "traddr": "10.0.0.1", 00:16:40.456 "trsvcid": "36194" 00:16:40.456 }, 00:16:40.456 "auth": { 00:16:40.456 "state": "completed", 00:16:40.456 "digest": "sha256", 00:16:40.456 "dhgroup": "ffdhe4096" 00:16:40.456 } 00:16:40.456 } 00:16:40.456 ]' 00:16:40.457 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.457 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.457 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.457 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.457 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.457 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.457 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.457 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.773 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:40.773 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:41.708 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.708 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:41.708 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.708 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.708 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.708 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.708 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.708 09:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.966 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.224 00:16:42.224 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.224 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.224 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.482 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.482 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.482 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.482 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.482 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.482 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.482 { 00:16:42.482 "cntlid": 27, 00:16:42.482 "qid": 0, 00:16:42.482 "state": "enabled", 00:16:42.482 "thread": "nvmf_tgt_poll_group_000", 00:16:42.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:42.482 "listen_address": { 00:16:42.482 "trtype": "TCP", 00:16:42.482 "adrfam": "IPv4", 00:16:42.482 "traddr": "10.0.0.2", 00:16:42.482 
"trsvcid": "4420" 00:16:42.482 }, 00:16:42.482 "peer_address": { 00:16:42.482 "trtype": "TCP", 00:16:42.482 "adrfam": "IPv4", 00:16:42.482 "traddr": "10.0.0.1", 00:16:42.482 "trsvcid": "36222" 00:16:42.482 }, 00:16:42.482 "auth": { 00:16:42.482 "state": "completed", 00:16:42.482 "digest": "sha256", 00:16:42.482 "dhgroup": "ffdhe4096" 00:16:42.482 } 00:16:42.482 } 00:16:42.482 ]' 00:16:42.482 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.740 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.740 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.740 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.740 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.740 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.741 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.741 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.998 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:42.998 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:43.936 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.936 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:43.936 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.936 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.936 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.936 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.936 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.936 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.194 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.452 00:16:44.452 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.452 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:44.452 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.710 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.710 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.710 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.710 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.710 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.710 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.710 { 00:16:44.710 "cntlid": 29, 00:16:44.710 "qid": 0, 00:16:44.710 "state": "enabled", 00:16:44.710 "thread": "nvmf_tgt_poll_group_000", 00:16:44.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:44.710 "listen_address": { 00:16:44.710 "trtype": "TCP", 00:16:44.710 "adrfam": "IPv4", 00:16:44.710 "traddr": "10.0.0.2", 00:16:44.710 "trsvcid": "4420" 00:16:44.710 }, 00:16:44.710 "peer_address": { 00:16:44.710 "trtype": "TCP", 00:16:44.710 "adrfam": "IPv4", 00:16:44.710 "traddr": "10.0.0.1", 00:16:44.710 "trsvcid": "36242" 00:16:44.710 }, 00:16:44.710 "auth": { 00:16:44.710 "state": "completed", 00:16:44.710 "digest": "sha256", 00:16:44.710 "dhgroup": "ffdhe4096" 00:16:44.710 } 00:16:44.710 } 00:16:44.710 ]' 00:16:44.710 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.970 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.970 09:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.970 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.970 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.970 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.970 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.970 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.229 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:45.229 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:46.165 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.165 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:46.165 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.165 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.165 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.165 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.165 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.165 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.423 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:46.423 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.423 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.423 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.423 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.424 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.424 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:16:46.424 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.424 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.424 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.424 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.424 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.424 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.682 00:16:46.682 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.682 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.682 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.940 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.940 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.940 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.940 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.940 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.940 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.940 { 00:16:46.940 "cntlid": 31, 00:16:46.940 "qid": 0, 00:16:46.940 "state": "enabled", 00:16:46.940 "thread": "nvmf_tgt_poll_group_000", 00:16:46.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:46.940 "listen_address": { 00:16:46.940 "trtype": "TCP", 00:16:46.940 "adrfam": "IPv4", 00:16:46.940 "traddr": "10.0.0.2", 00:16:46.940 "trsvcid": "4420" 00:16:46.940 }, 00:16:46.940 "peer_address": { 00:16:46.940 "trtype": "TCP", 00:16:46.940 "adrfam": "IPv4", 00:16:46.940 "traddr": "10.0.0.1", 00:16:46.940 "trsvcid": "36114" 00:16:46.940 }, 00:16:46.940 "auth": { 00:16:46.940 "state": "completed", 00:16:46.940 "digest": "sha256", 00:16:46.940 "dhgroup": "ffdhe4096" 00:16:46.940 } 00:16:46.940 } 00:16:46.940 ]' 00:16:46.940 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.199 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.199 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.199 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.199 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.199 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.199 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.199 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.458 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:47.458 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:48.397 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.397 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:48.397 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.397 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.397 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.397 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.397 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.397 09:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.655 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.222 00:16:49.222 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.222 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.222 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.480 { 00:16:49.480 "cntlid": 33, 00:16:49.480 "qid": 0, 00:16:49.480 "state": "enabled", 00:16:49.480 "thread": "nvmf_tgt_poll_group_000", 00:16:49.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:49.480 "listen_address": { 00:16:49.480 "trtype": "TCP", 00:16:49.480 "adrfam": "IPv4", 00:16:49.480 "traddr": "10.0.0.2", 00:16:49.480 
"trsvcid": "4420" 00:16:49.480 }, 00:16:49.480 "peer_address": { 00:16:49.480 "trtype": "TCP", 00:16:49.480 "adrfam": "IPv4", 00:16:49.480 "traddr": "10.0.0.1", 00:16:49.480 "trsvcid": "36142" 00:16:49.480 }, 00:16:49.480 "auth": { 00:16:49.480 "state": "completed", 00:16:49.480 "digest": "sha256", 00:16:49.480 "dhgroup": "ffdhe6144" 00:16:49.480 } 00:16:49.480 } 00:16:49.480 ]' 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.480 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.481 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.481 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.481 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.481 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.481 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.738 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:49.738 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:50.677 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.677 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:50.677 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.677 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.677 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.677 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.677 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.677 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.936 09:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.936 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.501 00:16:51.501 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.501 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.502 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.760 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.760 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.760 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.760 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.760 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.760 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.760 { 00:16:51.760 "cntlid": 35, 00:16:51.760 "qid": 0, 00:16:51.760 "state": "enabled", 00:16:51.760 "thread": "nvmf_tgt_poll_group_000", 00:16:51.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:51.760 "listen_address": { 00:16:51.760 "trtype": "TCP", 00:16:51.760 "adrfam": "IPv4", 00:16:51.760 "traddr": "10.0.0.2", 00:16:51.760 "trsvcid": "4420" 00:16:51.760 }, 00:16:51.760 "peer_address": { 00:16:51.760 "trtype": "TCP", 00:16:51.760 "adrfam": "IPv4", 00:16:51.760 "traddr": "10.0.0.1", 00:16:51.760 "trsvcid": "36170" 00:16:51.760 }, 00:16:51.760 "auth": { 00:16:51.760 "state": "completed", 00:16:51.760 "digest": "sha256", 00:16:51.760 "dhgroup": "ffdhe6144" 00:16:51.760 } 00:16:51.760 } 00:16:51.760 ]' 00:16:51.760 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.760 09:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.760 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.761 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.019 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.019 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.019 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.019 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.276 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:52.276 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:16:53.212 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.212 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:53.212 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.212 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.212 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.212 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.212 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.212 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.471 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.040 00:16:54.040 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.040 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.040 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.299 09:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.299 { 00:16:54.299 "cntlid": 37, 00:16:54.299 "qid": 0, 00:16:54.299 "state": "enabled", 00:16:54.299 "thread": "nvmf_tgt_poll_group_000", 00:16:54.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:54.299 "listen_address": { 00:16:54.299 "trtype": "TCP", 00:16:54.299 "adrfam": "IPv4", 00:16:54.299 "traddr": "10.0.0.2", 00:16:54.299 "trsvcid": "4420" 00:16:54.299 }, 00:16:54.299 "peer_address": { 00:16:54.299 "trtype": "TCP", 00:16:54.299 "adrfam": "IPv4", 00:16:54.299 "traddr": "10.0.0.1", 00:16:54.299 "trsvcid": "36196" 00:16:54.299 }, 00:16:54.299 "auth": { 00:16:54.299 "state": "completed", 00:16:54.299 "digest": "sha256", 00:16:54.299 "dhgroup": "ffdhe6144" 00:16:54.299 } 00:16:54.299 } 00:16:54.299 ]' 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.299 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.559 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:54.559 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:16:55.497 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.497 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:55.497 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.497 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.497 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.497 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.497 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.497 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.756 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.323 00:16:56.323 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.323 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.323 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.582 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.582 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.582 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.582 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.582 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.582 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.582 { 00:16:56.582 "cntlid": 39, 00:16:56.582 "qid": 0, 00:16:56.582 "state": "enabled", 00:16:56.582 "thread": "nvmf_tgt_poll_group_000", 00:16:56.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:56.582 "listen_address": { 00:16:56.582 "trtype": "TCP", 00:16:56.582 "adrfam": 
"IPv4", 00:16:56.582 "traddr": "10.0.0.2", 00:16:56.582 "trsvcid": "4420" 00:16:56.582 }, 00:16:56.582 "peer_address": { 00:16:56.582 "trtype": "TCP", 00:16:56.582 "adrfam": "IPv4", 00:16:56.582 "traddr": "10.0.0.1", 00:16:56.582 "trsvcid": "36226" 00:16:56.582 }, 00:16:56.582 "auth": { 00:16:56.582 "state": "completed", 00:16:56.582 "digest": "sha256", 00:16:56.582 "dhgroup": "ffdhe6144" 00:16:56.582 } 00:16:56.582 } 00:16:56.582 ]' 00:16:56.582 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.840 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.840 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.840 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.840 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.840 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.840 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.840 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.098 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:57.098 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.034 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.291 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.292 
09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.292 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.229 00:16:59.229 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.229 09:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.229 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.229 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.229 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.229 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.229 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.488 { 00:16:59.488 "cntlid": 41, 00:16:59.488 "qid": 0, 00:16:59.488 "state": "enabled", 00:16:59.488 "thread": "nvmf_tgt_poll_group_000", 00:16:59.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:16:59.488 "listen_address": { 00:16:59.488 "trtype": "TCP", 00:16:59.488 "adrfam": "IPv4", 00:16:59.488 "traddr": "10.0.0.2", 00:16:59.488 "trsvcid": "4420" 00:16:59.488 }, 00:16:59.488 "peer_address": { 00:16:59.488 "trtype": "TCP", 00:16:59.488 "adrfam": "IPv4", 00:16:59.488 "traddr": "10.0.0.1", 00:16:59.488 "trsvcid": "47542" 00:16:59.488 }, 00:16:59.488 "auth": { 00:16:59.488 "state": "completed", 00:16:59.488 "digest": "sha256", 00:16:59.488 "dhgroup": "ffdhe8192" 00:16:59.488 } 00:16:59.488 } 00:16:59.488 ]' 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.488 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.746 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:16:59.747 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:00.683 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.683 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:00.683 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.683 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.683 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.683 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.683 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.683 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.942 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.879 00:17:01.879 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.879 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.879 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.137 09:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.137 { 00:17:02.137 "cntlid": 43, 00:17:02.137 "qid": 0, 00:17:02.137 "state": "enabled", 00:17:02.137 "thread": "nvmf_tgt_poll_group_000", 00:17:02.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:02.137 "listen_address": { 00:17:02.137 "trtype": "TCP", 00:17:02.137 "adrfam": "IPv4", 00:17:02.137 "traddr": "10.0.0.2", 00:17:02.137 "trsvcid": "4420" 00:17:02.137 }, 00:17:02.137 "peer_address": { 00:17:02.137 "trtype": "TCP", 00:17:02.137 "adrfam": "IPv4", 00:17:02.137 "traddr": "10.0.0.1", 00:17:02.137 "trsvcid": "47568" 00:17:02.137 }, 00:17:02.137 "auth": { 00:17:02.137 "state": "completed", 00:17:02.137 "digest": "sha256", 00:17:02.137 "dhgroup": "ffdhe8192" 00:17:02.137 } 00:17:02.137 } 00:17:02.137 ]' 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.137 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.702 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:02.702 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.640 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.578 00:17:04.579 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.579 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.579 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.836 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.836 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.836 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.837 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.837 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.837 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.837 { 00:17:04.837 "cntlid": 45, 00:17:04.837 "qid": 0, 00:17:04.837 "state": "enabled", 00:17:04.837 "thread": "nvmf_tgt_poll_group_000", 00:17:04.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:04.837 
"listen_address": { 00:17:04.837 "trtype": "TCP", 00:17:04.837 "adrfam": "IPv4", 00:17:04.837 "traddr": "10.0.0.2", 00:17:04.837 "trsvcid": "4420" 00:17:04.837 }, 00:17:04.837 "peer_address": { 00:17:04.837 "trtype": "TCP", 00:17:04.837 "adrfam": "IPv4", 00:17:04.837 "traddr": "10.0.0.1", 00:17:04.837 "trsvcid": "47594" 00:17:04.837 }, 00:17:04.837 "auth": { 00:17:04.837 "state": "completed", 00:17:04.837 "digest": "sha256", 00:17:04.837 "dhgroup": "ffdhe8192" 00:17:04.837 } 00:17:04.837 } 00:17:04.837 ]' 00:17:04.837 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.837 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.837 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.837 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.837 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.095 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.095 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.095 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.354 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:05.354 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:06.290 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.290 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:06.290 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.290 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.290 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.290 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.290 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.290 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.549 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.550 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.488 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.488 { 00:17:07.488 "cntlid": 47, 00:17:07.488 "qid": 0, 00:17:07.488 "state": "enabled", 00:17:07.488 "thread": "nvmf_tgt_poll_group_000", 00:17:07.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:07.488 "listen_address": { 00:17:07.488 "trtype": "TCP", 00:17:07.488 "adrfam": "IPv4", 00:17:07.488 "traddr": "10.0.0.2", 00:17:07.488 "trsvcid": "4420" 00:17:07.488 }, 00:17:07.488 "peer_address": { 00:17:07.488 "trtype": "TCP", 00:17:07.488 "adrfam": "IPv4", 00:17:07.488 "traddr": "10.0.0.1", 00:17:07.488 "trsvcid": "53420" 00:17:07.488 }, 00:17:07.488 "auth": { 00:17:07.488 "state": "completed", 00:17:07.488 "digest": "sha256", 00:17:07.488 "dhgroup": "ffdhe8192" 00:17:07.488 } 00:17:07.488 } 00:17:07.488 ]' 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.488 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.488 09:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.746 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.746 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.746 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.746 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.746 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.005 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:08.005 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.996 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.254 
09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.254 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.511 00:17:09.511 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.511 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.511 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.768 { 00:17:09.768 "cntlid": 49, 00:17:09.768 "qid": 0, 00:17:09.768 "state": "enabled", 00:17:09.768 "thread": "nvmf_tgt_poll_group_000", 00:17:09.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:09.768 "listen_address": { 00:17:09.768 "trtype": "TCP", 00:17:09.768 "adrfam": "IPv4", 00:17:09.768 "traddr": "10.0.0.2", 00:17:09.768 "trsvcid": "4420" 00:17:09.768 }, 00:17:09.768 "peer_address": { 00:17:09.768 "trtype": "TCP", 00:17:09.768 "adrfam": "IPv4", 00:17:09.768 "traddr": "10.0.0.1", 00:17:09.768 "trsvcid": "53448" 00:17:09.768 }, 00:17:09.768 "auth": { 00:17:09.768 "state": "completed", 00:17:09.768 "digest": "sha384", 00:17:09.768 "dhgroup": "null" 00:17:09.768 } 00:17:09.768 } 00:17:09.768 ]' 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.768 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.025 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.025 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:10.025 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.282 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:10.283 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:11.218 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.218 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:11.218 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.218 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.218 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.218 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.218 09:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.218 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.478 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.736 00:17:11.736 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.736 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.736 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.994 { 00:17:11.994 "cntlid": 51, 00:17:11.994 "qid": 0, 00:17:11.994 "state": "enabled", 00:17:11.994 "thread": "nvmf_tgt_poll_group_000", 00:17:11.994 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:11.994 "listen_address": { 00:17:11.994 "trtype": "TCP", 00:17:11.994 "adrfam": "IPv4", 00:17:11.994 "traddr": "10.0.0.2", 00:17:11.994 "trsvcid": "4420" 00:17:11.994 }, 00:17:11.994 "peer_address": { 00:17:11.994 "trtype": "TCP", 00:17:11.994 "adrfam": "IPv4", 00:17:11.994 "traddr": "10.0.0.1", 00:17:11.994 "trsvcid": "53480" 00:17:11.994 }, 00:17:11.994 "auth": { 00:17:11.994 "state": "completed", 00:17:11.994 "digest": "sha384", 00:17:11.994 "dhgroup": "null" 00:17:11.994 } 00:17:11.994 } 00:17:11.994 ]' 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.994 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.995 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.995 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.562 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:12.562 09:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.496 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.065 00:17:14.065 09:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.065 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.065 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.065 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.065 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.065 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.065 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.065 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.065 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.065 { 00:17:14.065 "cntlid": 53, 00:17:14.065 "qid": 0, 00:17:14.065 "state": "enabled", 00:17:14.065 "thread": "nvmf_tgt_poll_group_000", 00:17:14.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:14.065 "listen_address": { 00:17:14.065 "trtype": "TCP", 00:17:14.065 "adrfam": "IPv4", 00:17:14.065 "traddr": "10.0.0.2", 00:17:14.065 "trsvcid": "4420" 00:17:14.065 }, 00:17:14.065 "peer_address": { 00:17:14.065 "trtype": "TCP", 00:17:14.065 "adrfam": "IPv4", 00:17:14.065 "traddr": "10.0.0.1", 00:17:14.065 "trsvcid": "53508" 00:17:14.065 }, 00:17:14.065 "auth": { 00:17:14.065 "state": "completed", 00:17:14.065 "digest": "sha384", 00:17:14.065 "dhgroup": "null" 00:17:14.065 } 00:17:14.065 } 00:17:14.065 ]' 00:17:14.324 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:14.324 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.324 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.324 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:14.324 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.324 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.324 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.324 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.582 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:14.582 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:15.518 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.518 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:15.518 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.518 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.518 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.518 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.518 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.518 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.775 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:17:15.776 
09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.776 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.034 00:17:16.034 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.034 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.034 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.292 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.292 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.292 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.292 09:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.292 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.292 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.292 { 00:17:16.292 "cntlid": 55, 00:17:16.292 "qid": 0, 00:17:16.292 "state": "enabled", 00:17:16.292 "thread": "nvmf_tgt_poll_group_000", 00:17:16.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:16.292 "listen_address": { 00:17:16.292 "trtype": "TCP", 00:17:16.292 "adrfam": "IPv4", 00:17:16.292 "traddr": "10.0.0.2", 00:17:16.292 "trsvcid": "4420" 00:17:16.292 }, 00:17:16.292 "peer_address": { 00:17:16.292 "trtype": "TCP", 00:17:16.292 "adrfam": "IPv4", 00:17:16.292 "traddr": "10.0.0.1", 00:17:16.292 "trsvcid": "53542" 00:17:16.292 }, 00:17:16.292 "auth": { 00:17:16.292 "state": "completed", 00:17:16.292 "digest": "sha384", 00:17:16.292 "dhgroup": "null" 00:17:16.292 } 00:17:16.292 } 00:17:16.292 ]' 00:17:16.292 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.292 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.292 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.550 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.550 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.550 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.550 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.550 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.808 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:16.808 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:17.742 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.742 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:17.742 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.742 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.742 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.742 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.742 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.742 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.742 09:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.000 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.257 00:17:18.257 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.257 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.257 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.515 { 00:17:18.515 "cntlid": 57, 00:17:18.515 "qid": 0, 00:17:18.515 "state": "enabled", 00:17:18.515 "thread": "nvmf_tgt_poll_group_000", 00:17:18.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:18.515 "listen_address": { 00:17:18.515 "trtype": "TCP", 00:17:18.515 "adrfam": "IPv4", 00:17:18.515 "traddr": "10.0.0.2", 00:17:18.515 
"trsvcid": "4420" 00:17:18.515 }, 00:17:18.515 "peer_address": { 00:17:18.515 "trtype": "TCP", 00:17:18.515 "adrfam": "IPv4", 00:17:18.515 "traddr": "10.0.0.1", 00:17:18.515 "trsvcid": "32920" 00:17:18.515 }, 00:17:18.515 "auth": { 00:17:18.515 "state": "completed", 00:17:18.515 "digest": "sha384", 00:17:18.515 "dhgroup": "ffdhe2048" 00:17:18.515 } 00:17:18.515 } 00:17:18.515 ]' 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.515 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.773 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.773 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.773 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.773 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.773 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.031 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:19.031 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:19.968 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.968 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:19.968 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.968 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.968 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.968 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.968 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.968 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.226 09:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.226 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.227 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.486 00:17:20.486 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.486 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.486 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.054 { 00:17:21.054 "cntlid": 59, 00:17:21.054 "qid": 0, 00:17:21.054 "state": "enabled", 00:17:21.054 "thread": "nvmf_tgt_poll_group_000", 00:17:21.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:21.054 "listen_address": { 00:17:21.054 "trtype": "TCP", 00:17:21.054 "adrfam": "IPv4", 00:17:21.054 "traddr": "10.0.0.2", 00:17:21.054 "trsvcid": "4420" 00:17:21.054 }, 00:17:21.054 "peer_address": { 00:17:21.054 "trtype": "TCP", 00:17:21.054 "adrfam": "IPv4", 00:17:21.054 "traddr": "10.0.0.1", 00:17:21.054 "trsvcid": "32950" 00:17:21.054 }, 00:17:21.054 "auth": { 00:17:21.054 "state": "completed", 00:17:21.054 "digest": "sha384", 00:17:21.054 "dhgroup": "ffdhe2048" 00:17:21.054 } 00:17:21.054 } 00:17:21.054 ]' 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.054 09:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.054 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.312 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:21.312 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:22.251 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.251 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:22.251 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.251 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.251 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.251 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.251 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.251 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.511 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.082 00:17:23.082 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.083 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.083 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.083 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.083 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.083 09:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.083 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.083 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.083 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.083 { 00:17:23.083 "cntlid": 61, 00:17:23.083 "qid": 0, 00:17:23.083 "state": "enabled", 00:17:23.083 "thread": "nvmf_tgt_poll_group_000", 00:17:23.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:23.083 "listen_address": { 00:17:23.083 "trtype": "TCP", 00:17:23.083 "adrfam": "IPv4", 00:17:23.083 "traddr": "10.0.0.2", 00:17:23.083 "trsvcid": "4420" 00:17:23.083 }, 00:17:23.083 "peer_address": { 00:17:23.083 "trtype": "TCP", 00:17:23.083 "adrfam": "IPv4", 00:17:23.083 "traddr": "10.0.0.1", 00:17:23.083 "trsvcid": "32974" 00:17:23.083 }, 00:17:23.083 "auth": { 00:17:23.083 "state": "completed", 00:17:23.083 "digest": "sha384", 00:17:23.083 "dhgroup": "ffdhe2048" 00:17:23.083 } 00:17:23.083 } 00:17:23.083 ]' 00:17:23.342 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.342 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.342 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.342 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.342 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.342 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.342 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.342 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.601 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:23.601 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:24.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:24.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.796 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.054 00:17:25.312 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.312 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.312 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.571 { 00:17:25.571 "cntlid": 63, 00:17:25.571 "qid": 0, 00:17:25.571 "state": "enabled", 00:17:25.571 "thread": "nvmf_tgt_poll_group_000", 00:17:25.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:25.571 "listen_address": { 00:17:25.571 "trtype": "TCP", 00:17:25.571 "adrfam": 
"IPv4", 00:17:25.571 "traddr": "10.0.0.2", 00:17:25.571 "trsvcid": "4420" 00:17:25.571 }, 00:17:25.571 "peer_address": { 00:17:25.571 "trtype": "TCP", 00:17:25.571 "adrfam": "IPv4", 00:17:25.571 "traddr": "10.0.0.1", 00:17:25.571 "trsvcid": "33000" 00:17:25.571 }, 00:17:25.571 "auth": { 00:17:25.571 "state": "completed", 00:17:25.571 "digest": "sha384", 00:17:25.571 "dhgroup": "ffdhe2048" 00:17:25.571 } 00:17:25.571 } 00:17:25.571 ]' 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.571 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.830 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:25.830 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.768 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.026 
09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.026 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.285 00:17:27.285 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.285 09:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.285 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.545 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.545 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.545 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.545 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.803 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.803 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.804 { 00:17:27.804 "cntlid": 65, 00:17:27.804 "qid": 0, 00:17:27.804 "state": "enabled", 00:17:27.804 "thread": "nvmf_tgt_poll_group_000", 00:17:27.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:27.804 "listen_address": { 00:17:27.804 "trtype": "TCP", 00:17:27.804 "adrfam": "IPv4", 00:17:27.804 "traddr": "10.0.0.2", 00:17:27.804 "trsvcid": "4420" 00:17:27.804 }, 00:17:27.804 "peer_address": { 00:17:27.804 "trtype": "TCP", 00:17:27.804 "adrfam": "IPv4", 00:17:27.804 "traddr": "10.0.0.1", 00:17:27.804 "trsvcid": "55506" 00:17:27.804 }, 00:17:27.804 "auth": { 00:17:27.804 "state": "completed", 00:17:27.804 "digest": "sha384", 00:17:27.804 "dhgroup": "ffdhe3072" 00:17:27.804 } 00:17:27.804 } 00:17:27.804 ]' 00:17:27.804 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.804 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:27.804 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.804 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.804 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.804 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.804 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.804 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.062 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:28.062 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:28.998 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.998 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:28.998 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.998 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.998 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.998 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.998 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.998 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.256 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.821 00:17:29.821 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.821 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.821 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.821 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.080 09:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.080 { 00:17:30.080 "cntlid": 67, 00:17:30.080 "qid": 0, 00:17:30.080 "state": "enabled", 00:17:30.080 "thread": "nvmf_tgt_poll_group_000", 00:17:30.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:30.080 "listen_address": { 00:17:30.080 "trtype": "TCP", 00:17:30.080 "adrfam": "IPv4", 00:17:30.080 "traddr": "10.0.0.2", 00:17:30.080 "trsvcid": "4420" 00:17:30.080 }, 00:17:30.080 "peer_address": { 00:17:30.080 "trtype": "TCP", 00:17:30.080 "adrfam": "IPv4", 00:17:30.080 "traddr": "10.0.0.1", 00:17:30.080 "trsvcid": "55530" 00:17:30.080 }, 00:17:30.080 "auth": { 00:17:30.080 "state": "completed", 00:17:30.080 "digest": "sha384", 00:17:30.080 "dhgroup": "ffdhe3072" 00:17:30.080 } 00:17:30.080 } 00:17:30.080 ]' 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.080 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.340 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:30.340 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:31.278 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.278 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:31.278 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.278 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.278 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.278 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.278 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.278 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.536 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.537 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.537 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.795 00:17:31.795 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.795 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.795 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.053 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.053 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.053 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.053 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.053 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.053 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.053 { 00:17:32.053 "cntlid": 69, 00:17:32.053 "qid": 0, 00:17:32.053 "state": "enabled", 00:17:32.053 "thread": "nvmf_tgt_poll_group_000", 00:17:32.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:32.053 
"listen_address": { 00:17:32.053 "trtype": "TCP", 00:17:32.053 "adrfam": "IPv4", 00:17:32.053 "traddr": "10.0.0.2", 00:17:32.053 "trsvcid": "4420" 00:17:32.053 }, 00:17:32.053 "peer_address": { 00:17:32.054 "trtype": "TCP", 00:17:32.054 "adrfam": "IPv4", 00:17:32.054 "traddr": "10.0.0.1", 00:17:32.054 "trsvcid": "55556" 00:17:32.054 }, 00:17:32.054 "auth": { 00:17:32.054 "state": "completed", 00:17:32.054 "digest": "sha384", 00:17:32.054 "dhgroup": "ffdhe3072" 00:17:32.054 } 00:17:32.054 } 00:17:32.054 ]' 00:17:32.054 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.312 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.312 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.312 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.312 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.312 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.312 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.312 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.569 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:32.569 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:33.506 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.506 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:33.506 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.506 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.506 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.506 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.506 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.506 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.765 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:33.765 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.765 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:33.765 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:33.765 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.765 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.765 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:17:33.765 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.766 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.766 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.766 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.766 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.766 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.022 00:17:34.279 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.279 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:34.279 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.536 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.536 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.536 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.536 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.537 { 00:17:34.537 "cntlid": 71, 00:17:34.537 "qid": 0, 00:17:34.537 "state": "enabled", 00:17:34.537 "thread": "nvmf_tgt_poll_group_000", 00:17:34.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:34.537 "listen_address": { 00:17:34.537 "trtype": "TCP", 00:17:34.537 "adrfam": "IPv4", 00:17:34.537 "traddr": "10.0.0.2", 00:17:34.537 "trsvcid": "4420" 00:17:34.537 }, 00:17:34.537 "peer_address": { 00:17:34.537 "trtype": "TCP", 00:17:34.537 "adrfam": "IPv4", 00:17:34.537 "traddr": "10.0.0.1", 00:17:34.537 "trsvcid": "55588" 00:17:34.537 }, 00:17:34.537 "auth": { 00:17:34.537 "state": "completed", 00:17:34.537 "digest": "sha384", 00:17:34.537 "dhgroup": "ffdhe3072" 00:17:34.537 } 00:17:34.537 } 00:17:34.537 ]' 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.537 09:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.537 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.795 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:34.795 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.728 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.986 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.576 00:17:36.576 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.576 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.576 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.833 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.833 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.833 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.833 09:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.833 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.833 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.833 { 00:17:36.833 "cntlid": 73, 00:17:36.833 "qid": 0, 00:17:36.833 "state": "enabled", 00:17:36.833 "thread": "nvmf_tgt_poll_group_000", 00:17:36.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:36.834 "listen_address": { 00:17:36.834 "trtype": "TCP", 00:17:36.834 "adrfam": "IPv4", 00:17:36.834 "traddr": "10.0.0.2", 00:17:36.834 "trsvcid": "4420" 00:17:36.834 }, 00:17:36.834 "peer_address": { 00:17:36.834 "trtype": "TCP", 00:17:36.834 "adrfam": "IPv4", 00:17:36.834 "traddr": "10.0.0.1", 00:17:36.834 "trsvcid": "53026" 00:17:36.834 }, 00:17:36.834 "auth": { 00:17:36.834 "state": "completed", 00:17:36.834 "digest": "sha384", 00:17:36.834 "dhgroup": "ffdhe4096" 00:17:36.834 } 00:17:36.834 } 00:17:36.834 ]' 00:17:36.834 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.834 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.834 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.834 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.834 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.834 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.834 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.834 09:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.091 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:37.091 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:38.024 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.024 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:38.024 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.024 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.024 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.024 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.024 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.024 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.284 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.855 00:17:38.855 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.855 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.855 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.855 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.113 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.113 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.113 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.113 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.113 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.113 { 00:17:39.113 "cntlid": 75, 00:17:39.113 "qid": 0, 00:17:39.113 "state": "enabled", 00:17:39.113 "thread": "nvmf_tgt_poll_group_000", 00:17:39.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:39.113 
"listen_address": { 00:17:39.113 "trtype": "TCP", 00:17:39.113 "adrfam": "IPv4", 00:17:39.113 "traddr": "10.0.0.2", 00:17:39.113 "trsvcid": "4420" 00:17:39.113 }, 00:17:39.114 "peer_address": { 00:17:39.114 "trtype": "TCP", 00:17:39.114 "adrfam": "IPv4", 00:17:39.114 "traddr": "10.0.0.1", 00:17:39.114 "trsvcid": "53066" 00:17:39.114 }, 00:17:39.114 "auth": { 00:17:39.114 "state": "completed", 00:17:39.114 "digest": "sha384", 00:17:39.114 "dhgroup": "ffdhe4096" 00:17:39.114 } 00:17:39.114 } 00:17:39.114 ]' 00:17:39.114 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.114 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.114 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.114 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.114 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.114 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.114 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.114 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.372 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:39.372 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:40.308 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.308 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:40.308 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.309 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.309 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.309 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.309 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.309 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.567 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.137 00:17:41.137 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:41.137 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.137 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.137 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.137 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.137 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.396 { 00:17:41.396 "cntlid": 77, 00:17:41.396 "qid": 0, 00:17:41.396 "state": "enabled", 00:17:41.396 "thread": "nvmf_tgt_poll_group_000", 00:17:41.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:41.396 "listen_address": { 00:17:41.396 "trtype": "TCP", 00:17:41.396 "adrfam": "IPv4", 00:17:41.396 "traddr": "10.0.0.2", 00:17:41.396 "trsvcid": "4420" 00:17:41.396 }, 00:17:41.396 "peer_address": { 00:17:41.396 "trtype": "TCP", 00:17:41.396 "adrfam": "IPv4", 00:17:41.396 "traddr": "10.0.0.1", 00:17:41.396 "trsvcid": "53094" 00:17:41.396 }, 00:17:41.396 "auth": { 00:17:41.396 "state": "completed", 00:17:41.396 "digest": "sha384", 00:17:41.396 "dhgroup": "ffdhe4096" 00:17:41.396 } 00:17:41.396 } 00:17:41.396 ]' 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.396 09:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.396 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.654 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:41.654 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:42.592 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.592 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:42.592 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.592 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.592 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.592 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.592 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.592 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:17:42.850 09:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.850 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.420 00:17:43.420 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.420 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.420 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.680 09:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.680 { 00:17:43.680 "cntlid": 79, 00:17:43.680 "qid": 0, 00:17:43.680 "state": "enabled", 00:17:43.680 "thread": "nvmf_tgt_poll_group_000", 00:17:43.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:43.680 "listen_address": { 00:17:43.680 "trtype": "TCP", 00:17:43.680 "adrfam": "IPv4", 00:17:43.680 "traddr": "10.0.0.2", 00:17:43.680 "trsvcid": "4420" 00:17:43.680 }, 00:17:43.680 "peer_address": { 00:17:43.680 "trtype": "TCP", 00:17:43.680 "adrfam": "IPv4", 00:17:43.680 "traddr": "10.0.0.1", 00:17:43.680 "trsvcid": "53118" 00:17:43.680 }, 00:17:43.680 "auth": { 00:17:43.680 "state": "completed", 00:17:43.680 "digest": "sha384", 00:17:43.680 "dhgroup": "ffdhe4096" 00:17:43.680 } 00:17:43.680 } 00:17:43.680 ]' 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.680 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.680 09:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.938 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:43.938 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:44.875 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.134 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.706 00:17:45.706 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.706 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.706 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.965 { 00:17:45.965 "cntlid": 81, 00:17:45.965 "qid": 0, 00:17:45.965 "state": "enabled", 00:17:45.965 "thread": "nvmf_tgt_poll_group_000", 00:17:45.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:45.965 "listen_address": { 
00:17:45.965 "trtype": "TCP", 00:17:45.965 "adrfam": "IPv4", 00:17:45.965 "traddr": "10.0.0.2", 00:17:45.965 "trsvcid": "4420" 00:17:45.965 }, 00:17:45.965 "peer_address": { 00:17:45.965 "trtype": "TCP", 00:17:45.965 "adrfam": "IPv4", 00:17:45.965 "traddr": "10.0.0.1", 00:17:45.965 "trsvcid": "53144" 00:17:45.965 }, 00:17:45.965 "auth": { 00:17:45.965 "state": "completed", 00:17:45.965 "digest": "sha384", 00:17:45.965 "dhgroup": "ffdhe6144" 00:17:45.965 } 00:17:45.965 } 00:17:45.965 ]' 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.965 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.531 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:46.531 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:47.100 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.100 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:47.100 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.100 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.358 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.358 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.358 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:47.358 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.616 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.183 00:17:48.183 09:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.183 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.183 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.440 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.440 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.440 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.440 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.440 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.440 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.440 { 00:17:48.440 "cntlid": 83, 00:17:48.440 "qid": 0, 00:17:48.440 "state": "enabled", 00:17:48.440 "thread": "nvmf_tgt_poll_group_000", 00:17:48.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:48.440 "listen_address": { 00:17:48.440 "trtype": "TCP", 00:17:48.440 "adrfam": "IPv4", 00:17:48.440 "traddr": "10.0.0.2", 00:17:48.440 "trsvcid": "4420" 00:17:48.440 }, 00:17:48.440 "peer_address": { 00:17:48.440 "trtype": "TCP", 00:17:48.440 "adrfam": "IPv4", 00:17:48.440 "traddr": "10.0.0.1", 00:17:48.441 "trsvcid": "43980" 00:17:48.441 }, 00:17:48.441 "auth": { 00:17:48.441 "state": "completed", 00:17:48.441 "digest": "sha384", 00:17:48.441 "dhgroup": "ffdhe6144" 00:17:48.441 } 00:17:48.441 } 00:17:48.441 ]' 00:17:48.441 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:48.441 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.441 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.441 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.441 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.441 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.441 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.441 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.700 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:48.701 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:49.674 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.674 09:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:49.674 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.674 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.674 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.674 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.674 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:49.674 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.952 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.560 00:17:50.560 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.560 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.560 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.837 { 00:17:50.837 "cntlid": 85, 00:17:50.837 "qid": 0, 00:17:50.837 "state": "enabled", 00:17:50.837 "thread": "nvmf_tgt_poll_group_000", 00:17:50.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:50.837 "listen_address": { 00:17:50.837 "trtype": "TCP", 00:17:50.837 "adrfam": "IPv4", 00:17:50.837 "traddr": "10.0.0.2", 00:17:50.837 "trsvcid": "4420" 00:17:50.837 }, 00:17:50.837 "peer_address": { 00:17:50.837 "trtype": "TCP", 00:17:50.837 "adrfam": "IPv4", 00:17:50.837 "traddr": "10.0.0.1", 00:17:50.837 "trsvcid": "43992" 00:17:50.837 }, 00:17:50.837 "auth": { 00:17:50.837 "state": "completed", 00:17:50.837 "digest": "sha384", 00:17:50.837 "dhgroup": "ffdhe6144" 00:17:50.837 } 00:17:50.837 } 00:17:50.837 ]' 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.837 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.110 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:51.110 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:17:52.079 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.079 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:52.079 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.079 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.079 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.079 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:52.079 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:52.079 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.338 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.908 00:17:52.908 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.908 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.908 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.167 { 00:17:53.167 "cntlid": 87, 00:17:53.167 "qid": 0, 00:17:53.167 "state": "enabled", 00:17:53.167 "thread": "nvmf_tgt_poll_group_000", 00:17:53.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:53.167 "listen_address": { 00:17:53.167 "trtype": 
"TCP", 00:17:53.167 "adrfam": "IPv4", 00:17:53.167 "traddr": "10.0.0.2", 00:17:53.167 "trsvcid": "4420" 00:17:53.167 }, 00:17:53.167 "peer_address": { 00:17:53.167 "trtype": "TCP", 00:17:53.167 "adrfam": "IPv4", 00:17:53.167 "traddr": "10.0.0.1", 00:17:53.167 "trsvcid": "44012" 00:17:53.167 }, 00:17:53.167 "auth": { 00:17:53.167 "state": "completed", 00:17:53.167 "digest": "sha384", 00:17:53.167 "dhgroup": "ffdhe6144" 00:17:53.167 } 00:17:53.167 } 00:17:53.167 ]' 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.167 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.425 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.425 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.425 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.425 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.425 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.684 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:53.684 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.622 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.882 09:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.882 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.458 00:17:55.718 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.718 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.718 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.978 { 00:17:55.978 "cntlid": 89, 00:17:55.978 "qid": 0, 00:17:55.978 "state": "enabled", 00:17:55.978 "thread": "nvmf_tgt_poll_group_000", 00:17:55.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:55.978 "listen_address": { 00:17:55.978 "trtype": "TCP", 00:17:55.978 "adrfam": "IPv4", 00:17:55.978 "traddr": "10.0.0.2", 00:17:55.978 "trsvcid": "4420" 00:17:55.978 }, 00:17:55.978 "peer_address": { 00:17:55.978 "trtype": "TCP", 00:17:55.978 "adrfam": "IPv4", 00:17:55.978 "traddr": "10.0.0.1", 00:17:55.978 "trsvcid": "44050" 00:17:55.978 }, 00:17:55.978 "auth": { 00:17:55.978 "state": "completed", 00:17:55.978 "digest": "sha384", 00:17:55.978 "dhgroup": "ffdhe8192" 00:17:55.978 } 00:17:55.978 } 00:17:55.978 ]' 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.978 09:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.978 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.238 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:56.238 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:17:57.206 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:57.206 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:57.206 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.206 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.206 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.206 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.206 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.206 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.771 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.772 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.341 00:17:58.601 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.601 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.601 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.866 { 00:17:58.866 "cntlid": 91, 00:17:58.866 "qid": 0, 00:17:58.866 "state": "enabled", 00:17:58.866 "thread": "nvmf_tgt_poll_group_000", 00:17:58.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:58.866 "listen_address": { 00:17:58.866 "trtype": "TCP", 00:17:58.866 "adrfam": "IPv4", 00:17:58.866 "traddr": "10.0.0.2", 00:17:58.866 "trsvcid": "4420" 00:17:58.866 }, 00:17:58.866 "peer_address": { 00:17:58.866 "trtype": "TCP", 00:17:58.866 "adrfam": "IPv4", 00:17:58.866 "traddr": "10.0.0.1", 00:17:58.866 "trsvcid": "56050" 00:17:58.866 }, 00:17:58.866 "auth": { 00:17:58.866 "state": "completed", 00:17:58.866 "digest": "sha384", 00:17:58.866 "dhgroup": "ffdhe8192" 00:17:58.866 } 00:17:58.866 } 00:17:58.866 ]' 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.866 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.130 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:17:59.130 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:00.067 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.067 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:00.067 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.067 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.067 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.067 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:00.067 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.068 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.326 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.264 00:18:01.264 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.264 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.264 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.522 { 00:18:01.522 "cntlid": 93, 00:18:01.522 "qid": 0, 00:18:01.522 "state": "enabled", 00:18:01.522 "thread": "nvmf_tgt_poll_group_000", 00:18:01.522 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:01.522 "listen_address": { 00:18:01.522 "trtype": "TCP", 00:18:01.522 "adrfam": "IPv4", 00:18:01.522 "traddr": "10.0.0.2", 00:18:01.522 "trsvcid": "4420" 00:18:01.522 }, 00:18:01.522 "peer_address": { 00:18:01.522 "trtype": "TCP", 00:18:01.522 "adrfam": "IPv4", 00:18:01.522 "traddr": "10.0.0.1", 00:18:01.522 "trsvcid": "56074" 00:18:01.522 }, 00:18:01.522 "auth": { 00:18:01.522 "state": "completed", 00:18:01.522 "digest": "sha384", 00:18:01.522 "dhgroup": "ffdhe8192" 00:18:01.522 } 00:18:01.522 } 00:18:01.522 ]' 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.522 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.780 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.780 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.780 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.040 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:02.040 09:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:02.978 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.978 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:02.978 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.978 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.978 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.978 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.978 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:02.978 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.236 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.176 00:18:04.176 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:04.176 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.176 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.435 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.435 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.436 { 00:18:04.436 "cntlid": 95, 00:18:04.436 "qid": 0, 00:18:04.436 "state": "enabled", 00:18:04.436 "thread": "nvmf_tgt_poll_group_000", 00:18:04.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:04.436 "listen_address": { 00:18:04.436 "trtype": "TCP", 00:18:04.436 "adrfam": "IPv4", 00:18:04.436 "traddr": "10.0.0.2", 00:18:04.436 "trsvcid": "4420" 00:18:04.436 }, 00:18:04.436 "peer_address": { 00:18:04.436 "trtype": "TCP", 00:18:04.436 "adrfam": "IPv4", 00:18:04.436 "traddr": "10.0.0.1", 00:18:04.436 "trsvcid": "56096" 00:18:04.436 }, 00:18:04.436 "auth": { 00:18:04.436 "state": "completed", 00:18:04.436 "digest": "sha384", 00:18:04.436 "dhgroup": "ffdhe8192" 00:18:04.436 } 00:18:04.436 } 00:18:04.436 ]' 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.436 09:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.436 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.695 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:04.695 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.633 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.891 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.150 00:18:06.150 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.150 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.150 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.410 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.669 09:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.669 { 00:18:06.669 "cntlid": 97, 00:18:06.669 "qid": 0, 00:18:06.669 "state": "enabled", 00:18:06.669 "thread": "nvmf_tgt_poll_group_000", 00:18:06.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:06.669 "listen_address": { 00:18:06.669 "trtype": "TCP", 00:18:06.669 "adrfam": "IPv4", 00:18:06.669 "traddr": "10.0.0.2", 00:18:06.669 "trsvcid": "4420" 00:18:06.669 }, 00:18:06.669 "peer_address": { 00:18:06.669 "trtype": "TCP", 00:18:06.669 "adrfam": "IPv4", 00:18:06.669 "traddr": "10.0.0.1", 00:18:06.669 "trsvcid": "46110" 00:18:06.669 }, 00:18:06.669 "auth": { 00:18:06.669 "state": "completed", 00:18:06.669 "digest": "sha512", 00:18:06.669 "dhgroup": "null" 00:18:06.669 } 00:18:06.669 } 00:18:06.669 ]' 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.669 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.930 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:06.930 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:07.871 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.871 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:07.871 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.871 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.871 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.871 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.871 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:07.871 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.129 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.700 00:18:08.700 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.700 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.700 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.958 { 00:18:08.958 "cntlid": 99, 
00:18:08.958 "qid": 0, 00:18:08.958 "state": "enabled", 00:18:08.958 "thread": "nvmf_tgt_poll_group_000", 00:18:08.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:08.958 "listen_address": { 00:18:08.958 "trtype": "TCP", 00:18:08.958 "adrfam": "IPv4", 00:18:08.958 "traddr": "10.0.0.2", 00:18:08.958 "trsvcid": "4420" 00:18:08.958 }, 00:18:08.958 "peer_address": { 00:18:08.958 "trtype": "TCP", 00:18:08.958 "adrfam": "IPv4", 00:18:08.958 "traddr": "10.0.0.1", 00:18:08.958 "trsvcid": "46128" 00:18:08.958 }, 00:18:08.958 "auth": { 00:18:08.958 "state": "completed", 00:18:08.958 "digest": "sha512", 00:18:08.958 "dhgroup": "null" 00:18:08.958 } 00:18:08.958 } 00:18:08.958 ]' 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.958 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.215 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret 
DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:09.215 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:10.156 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.156 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:10.156 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.156 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.156 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.156 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.156 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.156 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.415 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.674 00:18:10.674 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.674 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.674 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.933 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.933 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.933 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.933 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.933 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.933 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.933 { 00:18:10.933 "cntlid": 101, 00:18:10.933 "qid": 0, 00:18:10.933 "state": "enabled", 00:18:10.933 "thread": "nvmf_tgt_poll_group_000", 00:18:10.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:10.933 "listen_address": { 00:18:10.933 "trtype": "TCP", 00:18:10.933 "adrfam": "IPv4", 00:18:10.933 "traddr": "10.0.0.2", 00:18:10.933 "trsvcid": "4420" 00:18:10.933 }, 00:18:10.933 "peer_address": { 00:18:10.933 "trtype": "TCP", 00:18:10.933 "adrfam": "IPv4", 00:18:10.933 "traddr": "10.0.0.1", 00:18:10.933 "trsvcid": "46138" 00:18:10.933 }, 00:18:10.933 "auth": { 00:18:10.933 "state": "completed", 00:18:10.933 "digest": "sha512", 00:18:10.933 "dhgroup": "null" 00:18:10.933 } 00:18:10.933 } 
00:18:10.933 ]' 00:18:10.933 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.192 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.192 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.192 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:11.192 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.452 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:11.452 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:12.389 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.389 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.389 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:12.389 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.389 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.389 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.389 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.389 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:12.389 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.647 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.906 00:18:12.906 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.906 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.906 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.165 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.165 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:13.165 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.165 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.165 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.165 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.165 { 00:18:13.165 "cntlid": 103, 00:18:13.165 "qid": 0, 00:18:13.165 "state": "enabled", 00:18:13.165 "thread": "nvmf_tgt_poll_group_000", 00:18:13.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:13.165 "listen_address": { 00:18:13.165 "trtype": "TCP", 00:18:13.165 "adrfam": "IPv4", 00:18:13.165 "traddr": "10.0.0.2", 00:18:13.165 "trsvcid": "4420" 00:18:13.165 }, 00:18:13.165 "peer_address": { 00:18:13.165 "trtype": "TCP", 00:18:13.165 "adrfam": "IPv4", 00:18:13.165 "traddr": "10.0.0.1", 00:18:13.165 "trsvcid": "46156" 00:18:13.165 }, 00:18:13.165 "auth": { 00:18:13.165 "state": "completed", 00:18:13.165 "digest": "sha512", 00:18:13.165 "dhgroup": "null" 00:18:13.165 } 00:18:13.165 } 00:18:13.165 ]' 00:18:13.165 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.424 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.424 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.424 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:13.424 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.424 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.424 09:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.424 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.682 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:13.682 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:14.623 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.623 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:14.623 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.623 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.623 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.623 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.623 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.623 09:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:14.623 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.882 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.141 00:18:15.141 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.141 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.141 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.400 { 00:18:15.400 "cntlid": 105, 00:18:15.400 "qid": 0, 00:18:15.400 "state": "enabled", 00:18:15.400 "thread": "nvmf_tgt_poll_group_000", 00:18:15.400 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:15.400 "listen_address": { 00:18:15.400 "trtype": "TCP", 00:18:15.400 "adrfam": "IPv4", 00:18:15.400 "traddr": "10.0.0.2", 00:18:15.400 "trsvcid": "4420" 00:18:15.400 }, 00:18:15.400 "peer_address": { 00:18:15.400 "trtype": "TCP", 00:18:15.400 "adrfam": "IPv4", 00:18:15.400 "traddr": "10.0.0.1", 00:18:15.400 "trsvcid": "46168" 00:18:15.400 }, 00:18:15.400 "auth": { 00:18:15.400 "state": "completed", 00:18:15.400 "digest": "sha512", 00:18:15.400 "dhgroup": "ffdhe2048" 00:18:15.400 } 00:18:15.400 } 00:18:15.400 ]' 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.400 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.659 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.659 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.659 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.659 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.659 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.918 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret 
DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:15.918 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:16.856 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.856 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:16.856 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.856 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.856 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.856 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.856 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.856 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:17.114 09:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.115 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.373 00:18:17.373 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.373 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.373 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.632 { 00:18:17.632 "cntlid": 107, 00:18:17.632 "qid": 0, 00:18:17.632 "state": "enabled", 00:18:17.632 "thread": "nvmf_tgt_poll_group_000", 00:18:17.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:17.632 "listen_address": { 00:18:17.632 "trtype": "TCP", 00:18:17.632 "adrfam": "IPv4", 00:18:17.632 "traddr": "10.0.0.2", 00:18:17.632 "trsvcid": "4420" 00:18:17.632 }, 00:18:17.632 "peer_address": { 00:18:17.632 "trtype": "TCP", 00:18:17.632 "adrfam": "IPv4", 00:18:17.632 "traddr": "10.0.0.1", 00:18:17.632 "trsvcid": "50786" 00:18:17.632 }, 00:18:17.632 "auth": { 00:18:17.632 "state": 
"completed", 00:18:17.632 "digest": "sha512", 00:18:17.632 "dhgroup": "ffdhe2048" 00:18:17.632 } 00:18:17.632 } 00:18:17.632 ]' 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.632 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.890 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.890 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.890 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.890 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.890 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.152 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:18.152 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:19.091 09:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.091 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:19.091 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.091 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.091 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.091 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.091 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:19.091 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.351 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.607 00:18:19.607 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.607 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.608 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.866 
09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.866 { 00:18:19.866 "cntlid": 109, 00:18:19.866 "qid": 0, 00:18:19.866 "state": "enabled", 00:18:19.866 "thread": "nvmf_tgt_poll_group_000", 00:18:19.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:19.866 "listen_address": { 00:18:19.866 "trtype": "TCP", 00:18:19.866 "adrfam": "IPv4", 00:18:19.866 "traddr": "10.0.0.2", 00:18:19.866 "trsvcid": "4420" 00:18:19.866 }, 00:18:19.866 "peer_address": { 00:18:19.866 "trtype": "TCP", 00:18:19.866 "adrfam": "IPv4", 00:18:19.866 "traddr": "10.0.0.1", 00:18:19.866 "trsvcid": "50814" 00:18:19.866 }, 00:18:19.866 "auth": { 00:18:19.866 "state": "completed", 00:18:19.866 "digest": "sha512", 00:18:19.866 "dhgroup": "ffdhe2048" 00:18:19.866 } 00:18:19.866 } 00:18:19.866 ]' 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.866 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.866 09:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.124 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.124 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.124 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.382 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:20.382 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:21.321 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.321 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:21.321 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.321 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.321 
09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.321 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.321 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.321 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.580 09:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.580 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.838 00:18:21.838 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.838 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.838 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.096 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.096 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.096 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.096 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.096 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.096 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.096 { 00:18:22.096 "cntlid": 111, 
00:18:22.096 "qid": 0, 00:18:22.096 "state": "enabled", 00:18:22.096 "thread": "nvmf_tgt_poll_group_000", 00:18:22.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:22.096 "listen_address": { 00:18:22.096 "trtype": "TCP", 00:18:22.096 "adrfam": "IPv4", 00:18:22.097 "traddr": "10.0.0.2", 00:18:22.097 "trsvcid": "4420" 00:18:22.097 }, 00:18:22.097 "peer_address": { 00:18:22.097 "trtype": "TCP", 00:18:22.097 "adrfam": "IPv4", 00:18:22.097 "traddr": "10.0.0.1", 00:18:22.097 "trsvcid": "50836" 00:18:22.097 }, 00:18:22.097 "auth": { 00:18:22.097 "state": "completed", 00:18:22.097 "digest": "sha512", 00:18:22.097 "dhgroup": "ffdhe2048" 00:18:22.097 } 00:18:22.097 } 00:18:22.097 ]' 00:18:22.097 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.097 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.097 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.097 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.097 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.355 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.355 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.355 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.613 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:22.613 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.550 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.809 09:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.809 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.069 00:18:24.069 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.070 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.070 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.330 { 00:18:24.330 "cntlid": 113, 00:18:24.330 "qid": 0, 00:18:24.330 "state": "enabled", 00:18:24.330 "thread": "nvmf_tgt_poll_group_000", 00:18:24.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:24.330 "listen_address": { 00:18:24.330 "trtype": "TCP", 00:18:24.330 "adrfam": "IPv4", 00:18:24.330 "traddr": "10.0.0.2", 00:18:24.330 "trsvcid": "4420" 00:18:24.330 }, 00:18:24.330 "peer_address": { 00:18:24.330 "trtype": "TCP", 00:18:24.330 "adrfam": "IPv4", 00:18:24.330 "traddr": "10.0.0.1", 00:18:24.330 "trsvcid": "50858" 00:18:24.330 }, 00:18:24.330 "auth": { 00:18:24.330 "state": 
"completed", 00:18:24.330 "digest": "sha512", 00:18:24.330 "dhgroup": "ffdhe3072" 00:18:24.330 } 00:18:24.330 } 00:18:24.330 ]' 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.330 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.588 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.588 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.588 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.849 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:24.849 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret 
DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:25.790 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.790 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:25.790 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.790 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.790 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.790 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.790 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:25.790 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.049 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.308 00:18:26.308 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.308 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.308 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.566 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.566 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.566 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.566 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.566 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.566 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.566 { 00:18:26.566 "cntlid": 115, 00:18:26.566 "qid": 0, 00:18:26.566 "state": "enabled", 00:18:26.566 "thread": "nvmf_tgt_poll_group_000", 00:18:26.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:26.566 "listen_address": { 00:18:26.566 "trtype": "TCP", 00:18:26.566 "adrfam": "IPv4", 00:18:26.566 "traddr": "10.0.0.2", 00:18:26.566 "trsvcid": "4420" 00:18:26.566 }, 00:18:26.566 "peer_address": { 00:18:26.566 "trtype": "TCP", 00:18:26.567 "adrfam": "IPv4", 00:18:26.567 "traddr": "10.0.0.1", 00:18:26.567 "trsvcid": "53732" 00:18:26.567 }, 00:18:26.567 "auth": { 00:18:26.567 "state": "completed", 00:18:26.567 "digest": "sha512", 00:18:26.567 "dhgroup": "ffdhe3072" 00:18:26.567 } 00:18:26.567 } 00:18:26.567 ]' 00:18:26.567 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.567 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.567 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.825 09:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.825 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.825 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.825 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.825 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.083 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:27.083 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:28.022 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.022 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:28.022 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:28.022 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.022 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.022 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.022 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:28.022 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.280 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.850 00:18:28.850 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.850 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.850 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.108 09:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.108 { 00:18:29.108 "cntlid": 117, 00:18:29.108 "qid": 0, 00:18:29.108 "state": "enabled", 00:18:29.108 "thread": "nvmf_tgt_poll_group_000", 00:18:29.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:29.108 "listen_address": { 00:18:29.108 "trtype": "TCP", 00:18:29.108 "adrfam": "IPv4", 00:18:29.108 "traddr": "10.0.0.2", 00:18:29.108 "trsvcid": "4420" 00:18:29.108 }, 00:18:29.108 "peer_address": { 00:18:29.108 "trtype": "TCP", 00:18:29.108 "adrfam": "IPv4", 00:18:29.108 "traddr": "10.0.0.1", 00:18:29.108 "trsvcid": "53752" 00:18:29.108 }, 00:18:29.108 "auth": { 00:18:29.108 "state": "completed", 00:18:29.108 "digest": "sha512", 00:18:29.108 "dhgroup": "ffdhe3072" 00:18:29.108 } 00:18:29.108 } 00:18:29.108 ]' 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.108 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.366 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:29.367 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:30.301 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.301 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:30.301 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.301 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.301 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.301 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.301 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:30.301 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.558 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.816 00:18:30.816 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.816 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.816 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.075 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.334 { 00:18:31.334 "cntlid": 119, 00:18:31.334 "qid": 0, 00:18:31.334 "state": "enabled", 00:18:31.334 "thread": "nvmf_tgt_poll_group_000", 00:18:31.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:31.334 "listen_address": { 00:18:31.334 "trtype": "TCP", 00:18:31.334 "adrfam": "IPv4", 00:18:31.334 "traddr": "10.0.0.2", 00:18:31.334 "trsvcid": "4420" 00:18:31.334 }, 00:18:31.334 "peer_address": { 00:18:31.334 "trtype": "TCP", 00:18:31.334 "adrfam": "IPv4", 00:18:31.334 "traddr": "10.0.0.1", 
00:18:31.334 "trsvcid": "53782" 00:18:31.334 }, 00:18:31.334 "auth": { 00:18:31.334 "state": "completed", 00:18:31.334 "digest": "sha512", 00:18:31.334 "dhgroup": "ffdhe3072" 00:18:31.334 } 00:18:31.334 } 00:18:31.334 ]' 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.334 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.593 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:31.593 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.532 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:32.790 09:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.790 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.360 00:18:33.360 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.360 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.360 09:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.618 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.618 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.618 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.618 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.618 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.619 { 00:18:33.619 "cntlid": 121, 00:18:33.619 "qid": 0, 00:18:33.619 "state": "enabled", 00:18:33.619 "thread": "nvmf_tgt_poll_group_000", 00:18:33.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:33.619 "listen_address": { 00:18:33.619 "trtype": "TCP", 00:18:33.619 "adrfam": "IPv4", 00:18:33.619 "traddr": "10.0.0.2", 00:18:33.619 "trsvcid": "4420" 00:18:33.619 }, 00:18:33.619 "peer_address": { 00:18:33.619 "trtype": "TCP", 00:18:33.619 "adrfam": "IPv4", 00:18:33.619 "traddr": "10.0.0.1", 00:18:33.619 "trsvcid": "53808" 00:18:33.619 }, 00:18:33.619 "auth": { 00:18:33.619 "state": "completed", 00:18:33.619 "digest": "sha512", 00:18:33.619 "dhgroup": "ffdhe4096" 00:18:33.619 } 00:18:33.619 } 00:18:33.619 ]' 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.619 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.877 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:33.877 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:34.817 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.817 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:34.817 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.817 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.817 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.818 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.818 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.818 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.076 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.334 00:18:35.593 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.593 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.593 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.851 
09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.851 { 00:18:35.851 "cntlid": 123, 00:18:35.851 "qid": 0, 00:18:35.851 "state": "enabled", 00:18:35.851 "thread": "nvmf_tgt_poll_group_000", 00:18:35.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:35.851 "listen_address": { 00:18:35.851 "trtype": "TCP", 00:18:35.851 "adrfam": "IPv4", 00:18:35.851 "traddr": "10.0.0.2", 00:18:35.851 "trsvcid": "4420" 00:18:35.851 }, 00:18:35.851 "peer_address": { 00:18:35.851 "trtype": "TCP", 00:18:35.851 "adrfam": "IPv4", 00:18:35.851 "traddr": "10.0.0.1", 00:18:35.851 "trsvcid": "53834" 00:18:35.851 }, 00:18:35.851 "auth": { 00:18:35.851 "state": "completed", 00:18:35.851 "digest": "sha512", 00:18:35.851 "dhgroup": "ffdhe4096" 00:18:35.851 } 00:18:35.851 } 00:18:35.851 ]' 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.851 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.107 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:36.107 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:37.047 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.047 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:37.047 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.047 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.047 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.047 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.047 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:37.047 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.306 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.306 09:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.875 00:18:37.875 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.875 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.875 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.134 { 00:18:38.134 "cntlid": 125, 00:18:38.134 "qid": 0, 00:18:38.134 "state": "enabled", 00:18:38.134 "thread": "nvmf_tgt_poll_group_000", 00:18:38.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:38.134 "listen_address": { 00:18:38.134 "trtype": "TCP", 00:18:38.134 "adrfam": "IPv4", 00:18:38.134 "traddr": "10.0.0.2", 00:18:38.134 "trsvcid": "4420" 00:18:38.134 }, 00:18:38.134 "peer_address": { 
00:18:38.134 "trtype": "TCP", 00:18:38.134 "adrfam": "IPv4", 00:18:38.134 "traddr": "10.0.0.1", 00:18:38.134 "trsvcid": "33738" 00:18:38.134 }, 00:18:38.134 "auth": { 00:18:38.134 "state": "completed", 00:18:38.134 "digest": "sha512", 00:18:38.134 "dhgroup": "ffdhe4096" 00:18:38.134 } 00:18:38.134 } 00:18:38.134 ]' 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.134 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.134 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.134 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.134 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.134 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.134 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.393 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:38.393 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret 
DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:39.333 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.333 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:39.333 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.333 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.333 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.333 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.333 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:39.333 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:39.591 09:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.591 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.159 00:18:40.159 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.159 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.159 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.159 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.418 { 00:18:40.418 "cntlid": 127, 00:18:40.418 "qid": 0, 00:18:40.418 "state": "enabled", 00:18:40.418 "thread": "nvmf_tgt_poll_group_000", 00:18:40.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:40.418 "listen_address": { 00:18:40.418 "trtype": "TCP", 00:18:40.418 "adrfam": "IPv4", 00:18:40.418 "traddr": "10.0.0.2", 00:18:40.418 "trsvcid": "4420" 00:18:40.418 }, 00:18:40.418 "peer_address": { 00:18:40.418 "trtype": "TCP", 00:18:40.418 "adrfam": "IPv4", 00:18:40.418 "traddr": "10.0.0.1", 00:18:40.418 "trsvcid": "33760" 00:18:40.418 }, 00:18:40.418 "auth": { 00:18:40.418 "state": "completed", 00:18:40.418 "digest": "sha512", 00:18:40.418 "dhgroup": "ffdhe4096" 00:18:40.418 } 00:18:40.418 } 00:18:40.418 ]' 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.418 09:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.418 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.676 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:40.676 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.616 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.875 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.442 00:18:42.442 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.442 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.442 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.702 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.702 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.702 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.702 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.961 09:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.961 { 00:18:42.961 "cntlid": 129, 00:18:42.961 "qid": 0, 00:18:42.961 "state": "enabled", 00:18:42.961 "thread": "nvmf_tgt_poll_group_000", 00:18:42.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:42.961 "listen_address": { 00:18:42.961 "trtype": "TCP", 00:18:42.961 "adrfam": "IPv4", 00:18:42.961 "traddr": "10.0.0.2", 00:18:42.961 "trsvcid": "4420" 00:18:42.961 }, 00:18:42.961 "peer_address": { 00:18:42.961 "trtype": "TCP", 00:18:42.961 "adrfam": "IPv4", 00:18:42.961 "traddr": "10.0.0.1", 00:18:42.961 "trsvcid": "33784" 00:18:42.961 }, 00:18:42.961 "auth": { 00:18:42.961 "state": "completed", 00:18:42.961 "digest": "sha512", 00:18:42.961 "dhgroup": "ffdhe6144" 00:18:42.961 } 00:18:42.961 } 00:18:42.961 ]' 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.961 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.219 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:43.219 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:44.157 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.157 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:44.157 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.157 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.157 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.157 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.157 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:44.157 09:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.415 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.985 00:18:44.985 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.985 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.985 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.244 { 00:18:45.244 "cntlid": 131, 00:18:45.244 "qid": 0, 00:18:45.244 "state": "enabled", 00:18:45.244 "thread": "nvmf_tgt_poll_group_000", 00:18:45.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:45.244 "listen_address": { 00:18:45.244 "trtype": "TCP", 00:18:45.244 "adrfam": "IPv4", 00:18:45.244 "traddr": "10.0.0.2", 00:18:45.244 
"trsvcid": "4420" 00:18:45.244 }, 00:18:45.244 "peer_address": { 00:18:45.244 "trtype": "TCP", 00:18:45.244 "adrfam": "IPv4", 00:18:45.244 "traddr": "10.0.0.1", 00:18:45.244 "trsvcid": "33810" 00:18:45.244 }, 00:18:45.244 "auth": { 00:18:45.244 "state": "completed", 00:18:45.244 "digest": "sha512", 00:18:45.244 "dhgroup": "ffdhe6144" 00:18:45.244 } 00:18:45.244 } 00:18:45.244 ]' 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.244 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.503 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.503 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.503 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.503 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.503 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.764 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:45.764 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:46.704 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.704 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:46.704 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.704 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.704 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.704 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.704 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.704 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.962 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.529 00:18:47.529 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.529 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.529 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.788 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.788 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.789 { 00:18:47.789 "cntlid": 133, 00:18:47.789 "qid": 0, 00:18:47.789 "state": "enabled", 00:18:47.789 "thread": "nvmf_tgt_poll_group_000", 00:18:47.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:47.789 "listen_address": { 00:18:47.789 "trtype": "TCP", 00:18:47.789 "adrfam": "IPv4", 00:18:47.789 "traddr": "10.0.0.2", 00:18:47.789 "trsvcid": "4420" 00:18:47.789 }, 00:18:47.789 "peer_address": { 00:18:47.789 "trtype": "TCP", 00:18:47.789 "adrfam": "IPv4", 00:18:47.789 "traddr": "10.0.0.1", 00:18:47.789 "trsvcid": "54826" 00:18:47.789 }, 00:18:47.789 "auth": { 00:18:47.789 "state": "completed", 00:18:47.789 "digest": "sha512", 00:18:47.789 "dhgroup": "ffdhe6144" 00:18:47.789 } 00:18:47.789 } 00:18:47.789 ]' 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.789 09:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.789 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.049 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:48.049 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:48.989 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.989 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:48.989 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.989 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.989 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.989 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.989 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:48.989 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.248 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.814 00:18:49.814 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.814 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.814 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.074 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.074 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.074 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.074 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.333 { 00:18:50.333 "cntlid": 135, 00:18:50.333 "qid": 0, 00:18:50.333 "state": "enabled", 00:18:50.333 "thread": "nvmf_tgt_poll_group_000", 00:18:50.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:50.333 "listen_address": { 00:18:50.333 "trtype": "TCP", 00:18:50.333 "adrfam": "IPv4", 00:18:50.333 "traddr": "10.0.0.2", 00:18:50.333 "trsvcid": "4420" 00:18:50.333 }, 00:18:50.333 "peer_address": { 00:18:50.333 "trtype": "TCP", 00:18:50.333 "adrfam": "IPv4", 00:18:50.333 "traddr": "10.0.0.1", 00:18:50.333 "trsvcid": "54840" 00:18:50.333 }, 00:18:50.333 "auth": { 00:18:50.333 "state": "completed", 00:18:50.333 "digest": "sha512", 00:18:50.333 "dhgroup": "ffdhe6144" 00:18:50.333 } 00:18:50.333 } 00:18:50.333 ]' 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.333 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.592 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:50.592 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:18:51.534 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.534 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:51.534 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.534 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.534 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.534 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.534 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.534 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:51.534 09:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.105 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.674 00:18:52.674 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.674 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.674 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.935 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.935 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.935 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.935 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.935 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.935 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.935 { 00:18:52.935 "cntlid": 137, 00:18:52.935 "qid": 0, 00:18:52.935 "state": "enabled", 00:18:52.935 "thread": "nvmf_tgt_poll_group_000", 00:18:52.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:52.935 "listen_address": { 00:18:52.935 "trtype": "TCP", 00:18:52.935 "adrfam": "IPv4", 00:18:52.935 "traddr": "10.0.0.2", 00:18:52.935 
"trsvcid": "4420" 00:18:52.935 }, 00:18:52.935 "peer_address": { 00:18:52.935 "trtype": "TCP", 00:18:52.935 "adrfam": "IPv4", 00:18:52.935 "traddr": "10.0.0.1", 00:18:52.935 "trsvcid": "54870" 00:18:52.935 }, 00:18:52.935 "auth": { 00:18:52.935 "state": "completed", 00:18:52.935 "digest": "sha512", 00:18:52.935 "dhgroup": "ffdhe8192" 00:18:52.935 } 00:18:52.935 } 00:18:52.935 ]' 00:18:53.194 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.194 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.194 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.194 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.194 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.194 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.194 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.194 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.452 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:53.453 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:18:54.389 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.389 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:54.389 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.389 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.389 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.389 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.389 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:54.389 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.648 09:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.648 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.590 00:18:55.590 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.590 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.590 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.849 { 00:18:55.849 "cntlid": 139, 00:18:55.849 "qid": 0, 00:18:55.849 "state": "enabled", 00:18:55.849 "thread": "nvmf_tgt_poll_group_000", 00:18:55.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:55.849 "listen_address": { 00:18:55.849 "trtype": "TCP", 00:18:55.849 "adrfam": "IPv4", 00:18:55.849 "traddr": "10.0.0.2", 00:18:55.849 "trsvcid": "4420" 00:18:55.849 }, 00:18:55.849 "peer_address": { 00:18:55.849 "trtype": "TCP", 00:18:55.849 "adrfam": "IPv4", 00:18:55.849 "traddr": "10.0.0.1", 00:18:55.849 "trsvcid": "54884" 00:18:55.849 }, 00:18:55.849 "auth": { 00:18:55.849 "state": "completed", 00:18:55.849 "digest": "sha512", 00:18:55.849 "dhgroup": "ffdhe8192" 00:18:55.849 } 00:18:55.849 } 00:18:55.849 ]' 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.849 09:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.849 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.418 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:56.418 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: --dhchap-ctrl-secret DHHC-1:02:YWZmZGZjZGM4OTE3OWE5MTAwZjBjNzc1N2ZlZTZiZWZiMjUxYWYzYmRiMjdkMDlilcC2uQ==: 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.354 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.297 00:18:58.297 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.297 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.297 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.556 09:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.556 { 00:18:58.556 "cntlid": 141, 00:18:58.556 "qid": 0, 00:18:58.556 "state": "enabled", 00:18:58.556 "thread": "nvmf_tgt_poll_group_000", 00:18:58.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:58.556 "listen_address": { 00:18:58.556 "trtype": "TCP", 00:18:58.556 "adrfam": "IPv4", 00:18:58.556 "traddr": "10.0.0.2", 00:18:58.556 "trsvcid": "4420" 00:18:58.556 }, 00:18:58.556 "peer_address": { 00:18:58.556 "trtype": "TCP", 00:18:58.556 "adrfam": "IPv4", 00:18:58.556 "traddr": "10.0.0.1", 00:18:58.556 "trsvcid": "36492" 00:18:58.556 }, 00:18:58.556 "auth": { 00:18:58.556 "state": "completed", 00:18:58.556 "digest": "sha512", 00:18:58.556 "dhgroup": "ffdhe8192" 00:18:58.556 } 00:18:58.556 } 00:18:58.556 ]' 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.556 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.816 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.816 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.816 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.076 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:18:59.076 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:01:NjBmY2YzMDU2ZGZjYzMwYjQyNDRjZTBiNDk0N2I2ZmNCgJ4d: 00:19:00.012 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.012 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:00.012 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.012 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.012 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.012 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.012 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.012 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.271 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.209 00:19:01.209 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.209 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.209 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.209 { 00:19:01.209 "cntlid": 143, 00:19:01.209 "qid": 0, 00:19:01.209 "state": "enabled", 00:19:01.209 "thread": "nvmf_tgt_poll_group_000", 00:19:01.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:01.209 "listen_address": { 00:19:01.209 "trtype": "TCP", 00:19:01.209 "adrfam": 
"IPv4", 00:19:01.209 "traddr": "10.0.0.2", 00:19:01.209 "trsvcid": "4420" 00:19:01.209 }, 00:19:01.209 "peer_address": { 00:19:01.209 "trtype": "TCP", 00:19:01.209 "adrfam": "IPv4", 00:19:01.209 "traddr": "10.0.0.1", 00:19:01.209 "trsvcid": "36512" 00:19:01.209 }, 00:19:01.209 "auth": { 00:19:01.209 "state": "completed", 00:19:01.209 "digest": "sha512", 00:19:01.209 "dhgroup": "ffdhe8192" 00:19:01.209 } 00:19:01.209 } 00:19:01.209 ]' 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.209 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.468 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.468 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.468 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.468 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.468 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.730 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:19:01.730 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.668 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.927 09:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.927 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.866 00:19:03.866 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.866 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.866 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.125 { 00:19:04.125 "cntlid": 145, 00:19:04.125 "qid": 0, 00:19:04.125 "state": "enabled", 00:19:04.125 "thread": "nvmf_tgt_poll_group_000", 00:19:04.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:04.125 "listen_address": { 00:19:04.125 "trtype": "TCP", 00:19:04.125 "adrfam": "IPv4", 00:19:04.125 "traddr": "10.0.0.2", 00:19:04.125 "trsvcid": "4420" 00:19:04.125 }, 00:19:04.125 "peer_address": { 00:19:04.125 "trtype": "TCP", 00:19:04.125 "adrfam": "IPv4", 00:19:04.125 "traddr": "10.0.0.1", 00:19:04.125 "trsvcid": "36534" 00:19:04.125 }, 00:19:04.125 "auth": { 00:19:04.125 "state": 
"completed", 00:19:04.125 "digest": "sha512", 00:19:04.125 "dhgroup": "ffdhe8192" 00:19:04.125 } 00:19:04.125 } 00:19:04.125 ]' 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.125 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.125 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.125 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.125 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.125 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.125 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.386 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:19:04.386 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:NjgyZmE4NzJmNmMwMDMwOThkYTgzYzViZjM3ZGUwYTE2NjBmYTI5OTVhMDk4OWY0VvKaGA==: --dhchap-ctrl-secret 
DHHC-1:03:OWZlMjAwMDQxZjg0MWI2YWNlNzU3Y2MwYTU1YWI5NzJjZGZmMmMxZGMzYTJlYjc3YzM3YjI0NTQ4OTExYTJjMi1JLHc=: 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.327 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:05.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:05.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:05.328 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:06.268 request: 00:19:06.268 { 00:19:06.268 "name": "nvme0", 00:19:06.268 "trtype": "tcp", 00:19:06.268 "traddr": "10.0.0.2", 00:19:06.268 "adrfam": "ipv4", 00:19:06.268 "trsvcid": "4420", 00:19:06.268 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:06.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:06.268 "prchk_reftag": false, 00:19:06.268 "prchk_guard": false, 00:19:06.268 "hdgst": false, 00:19:06.268 "ddgst": false, 00:19:06.268 "dhchap_key": "key2", 00:19:06.268 "allow_unrecognized_csi": false, 00:19:06.268 "method": "bdev_nvme_attach_controller", 00:19:06.268 "req_id": 1 00:19:06.268 } 00:19:06.268 Got JSON-RPC error response 00:19:06.268 response: 00:19:06.268 { 00:19:06.268 "code": -5, 00:19:06.268 "message": 
"Input/output error" 00:19:06.268 } 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:06.269 09:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:06.269 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:07.208 request: 00:19:07.208 { 00:19:07.208 "name": "nvme0", 00:19:07.208 "trtype": "tcp", 00:19:07.208 "traddr": "10.0.0.2", 00:19:07.208 "adrfam": "ipv4", 00:19:07.208 "trsvcid": "4420", 00:19:07.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:07.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:07.208 "prchk_reftag": false, 00:19:07.208 "prchk_guard": false, 00:19:07.208 "hdgst": 
false, 00:19:07.208 "ddgst": false, 00:19:07.208 "dhchap_key": "key1", 00:19:07.208 "dhchap_ctrlr_key": "ckey2", 00:19:07.208 "allow_unrecognized_csi": false, 00:19:07.208 "method": "bdev_nvme_attach_controller", 00:19:07.208 "req_id": 1 00:19:07.208 } 00:19:07.208 Got JSON-RPC error response 00:19:07.208 response: 00:19:07.208 { 00:19:07.208 "code": -5, 00:19:07.208 "message": "Input/output error" 00:19:07.208 } 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
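The `NOT` wrapper driving these negative tests (the `es=0`/`es=1` bookkeeping from autotest_common.sh visible above) boils down to inverting the wrapped command's exit status, so a connect attempt that correctly returns "Input/output error" counts as a pass. A minimal sketch, not SPDK's exact implementation:

```shell
# Minimal sketch of a NOT-style negative-test helper: succeed only when
# the wrapped command fails. The real autotest_common.sh version also
# validates the argument type and distinguishes signal deaths (es > 128).
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded: the negative test fails
    fi
    return 0        # command failed, as the test expects
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```

In the log, `NOT bdev_connect -b nvme0 --dhchap-key key2` passes precisely because the host was added with key1 only, so the attach RPC must fail.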
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.208 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.778 request: 00:19:07.778 { 00:19:07.778 "name": "nvme0", 00:19:07.778 "trtype": 
"tcp", 00:19:07.778 "traddr": "10.0.0.2", 00:19:07.778 "adrfam": "ipv4", 00:19:07.778 "trsvcid": "4420", 00:19:07.778 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:07.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:07.778 "prchk_reftag": false, 00:19:07.778 "prchk_guard": false, 00:19:07.778 "hdgst": false, 00:19:07.778 "ddgst": false, 00:19:07.778 "dhchap_key": "key1", 00:19:07.778 "dhchap_ctrlr_key": "ckey1", 00:19:07.778 "allow_unrecognized_csi": false, 00:19:07.778 "method": "bdev_nvme_attach_controller", 00:19:07.778 "req_id": 1 00:19:07.778 } 00:19:07.778 Got JSON-RPC error response 00:19:07.778 response: 00:19:07.778 { 00:19:07.778 "code": -5, 00:19:07.778 "message": "Input/output error" 00:19:07.778 } 00:19:08.037 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:08.037 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:08.037 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:08.037 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 208530 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 208530 ']' 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 208530 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 208530 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 208530' 00:19:08.038 killing process with pid 208530 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 208530 00:19:08.038 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 208530 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=230977 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 230977 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 230977 ']' 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.300 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
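The `killprocess 208530` sequence above first confirms the pid still names the expected process (`ps --no-headers -o comm=`), refuses to kill a sudo-owned process, then kills and waits. A simplified sketch of that flow, using a background `sleep` as a stand-in for `nvmf_tgt`:

```shell
# Simplified killprocess flow: confirm the pid is alive, send SIGTERM,
# then reap it so no zombie is left behind. (The real helper also checks
# the process name and special-cases processes running under sudo.)
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # ignore the signal exit status
}

sleep 30 &                                   # stand-in for the nvmf target
target_pid=$!
killprocess "$target_pid"
kill -0 "$target_pid" 2>/dev/null || echo "target stopped"
```

Reaping via `wait` matters here because the next step immediately restarts `nvmf_tgt` inside the same netns and re-binds the listener.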
target/auth.sh@163 -- # waitforlisten 230977 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 230977 ']' 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.559 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.819 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.819 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:08.819 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:08.819 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.819 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.819 null0 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.APp 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.AOe ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AOe 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GyJ 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.VUe ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VUe 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6Pr 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.LzD ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LzD 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QoR 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.080 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.081 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.463 nvme0n1 00:19:10.463 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.463 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.463 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.722 { 00:19:10.722 "cntlid": 1, 00:19:10.722 "qid": 0, 00:19:10.722 "state": "enabled", 00:19:10.722 "thread": "nvmf_tgt_poll_group_000", 00:19:10.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:10.722 "listen_address": { 00:19:10.722 "trtype": "TCP", 00:19:10.722 "adrfam": "IPv4", 00:19:10.722 "traddr": "10.0.0.2", 00:19:10.722 "trsvcid": "4420" 00:19:10.722 }, 00:19:10.722 "peer_address": { 00:19:10.722 "trtype": "TCP", 00:19:10.722 "adrfam": "IPv4", 00:19:10.722 "traddr": 
"10.0.0.1", 00:19:10.722 "trsvcid": "40658" 00:19:10.722 }, 00:19:10.722 "auth": { 00:19:10.722 "state": "completed", 00:19:10.722 "digest": "sha512", 00:19:10.722 "dhgroup": "ffdhe8192" 00:19:10.722 } 00:19:10.722 } 00:19:10.722 ]' 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.722 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.981 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:19:10.981 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:19:11.921 09:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:11.921 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:12.487 09:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.487 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.744 request: 00:19:12.744 { 00:19:12.744 "name": "nvme0", 00:19:12.744 "trtype": "tcp", 00:19:12.744 "traddr": "10.0.0.2", 00:19:12.744 "adrfam": "ipv4", 00:19:12.744 "trsvcid": "4420", 00:19:12.744 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:12.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:12.744 "prchk_reftag": false, 00:19:12.744 "prchk_guard": false, 00:19:12.744 "hdgst": false, 00:19:12.744 "ddgst": false, 00:19:12.744 "dhchap_key": "key3", 00:19:12.744 
"allow_unrecognized_csi": false, 00:19:12.744 "method": "bdev_nvme_attach_controller", 00:19:12.744 "req_id": 1 00:19:12.744 } 00:19:12.744 Got JSON-RPC error response 00:19:12.744 response: 00:19:12.744 { 00:19:12.744 "code": -5, 00:19:12.744 "message": "Input/output error" 00:19:12.744 } 00:19:12.744 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:12.744 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:12.744 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:12.744 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:12.744 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:12.744 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:12.744 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:12.744 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:13.002 09:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.002 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.259 request: 00:19:13.259 { 00:19:13.259 "name": "nvme0", 00:19:13.259 "trtype": "tcp", 00:19:13.259 "traddr": "10.0.0.2", 00:19:13.259 "adrfam": "ipv4", 00:19:13.259 "trsvcid": "4420", 00:19:13.259 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:13.259 "prchk_reftag": false, 00:19:13.259 "prchk_guard": false, 00:19:13.259 "hdgst": false, 00:19:13.259 "ddgst": false, 00:19:13.259 "dhchap_key": "key3", 00:19:13.259 "allow_unrecognized_csi": false, 00:19:13.259 "method": "bdev_nvme_attach_controller", 00:19:13.259 "req_id": 1 00:19:13.259 } 00:19:13.259 Got JSON-RPC error response 00:19:13.259 response: 00:19:13.259 { 00:19:13.259 "code": -5, 00:19:13.259 "message": "Input/output error" 00:19:13.259 } 00:19:13.259 
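The `NOT bdev_connect` records above are negative tests: after the host's allowed DHCHAP digests/dhgroups are restricted with `bdev_nvme_set_options`, the attach is expected to fail (the `-5` / "Input/output error" JSON-RPC response), and the `NOT` helper from common/autotest_common.sh inverts the exit status so an expected failure counts as a pass. A simplified sketch of that pattern; the real helper additionally records the status into `es` and checks it against ranges, as the `@653`/`@661` records show:

```shell
#!/usr/bin/env bash
# NOT: succeed only when the wrapped command fails.
# Simplified sketch of the pattern in common/autotest_common.sh; the real
# helper also captures the exit status ("es") for range checks afterwards.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> the test should fail
    fi
    return 0       # command failed as expected -> the test passes
}

# In this log the wrapper is used as, e.g.:
#   NOT bdev_connect -b nvme0 --dhchap-key key3
```

This is why the error responses above are followed by `es=1` and the run continues normally: the failure was the asserted outcome.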
09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.259 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.517 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:14.084 request: 00:19:14.084 { 00:19:14.084 "name": "nvme0", 00:19:14.084 "trtype": "tcp", 00:19:14.084 "traddr": "10.0.0.2", 00:19:14.084 "adrfam": "ipv4", 00:19:14.084 "trsvcid": "4420", 00:19:14.084 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:14.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:14.084 "prchk_reftag": false, 00:19:14.084 "prchk_guard": false, 00:19:14.084 "hdgst": false, 00:19:14.084 "ddgst": false, 00:19:14.084 "dhchap_key": "key0", 00:19:14.084 "dhchap_ctrlr_key": "key1", 00:19:14.084 "allow_unrecognized_csi": false, 00:19:14.084 "method": "bdev_nvme_attach_controller", 00:19:14.084 "req_id": 1 00:19:14.084 } 00:19:14.084 Got JSON-RPC error response 00:19:14.084 response: 00:19:14.084 { 00:19:14.084 "code": -5, 00:19:14.084 "message": "Input/output error" 00:19:14.084 } 00:19:14.084 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:14.084 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:14.084 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:14.084 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:14.084 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:14.084 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:14.084 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:14.342 nvme0n1 00:19:14.342 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:14.342 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:14.342 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.601 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.601 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.601 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.858 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 00:19:14.858 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.858 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:14.858 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.858 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:14.858 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:14.858 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:16.233 nvme0n1 00:19:16.233 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:16.233 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:16.233 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.492 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.492 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:16.492 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.492 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.492 
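Every host-side call in this log goes through the same two target/auth.sh helpers: `hostrpc` (rpc.py pointed at the host app's `/var/tmp/host.sock`) and `bdev_connect` (`bdev_nvme_attach_controller` with the fixed transport/address arguments, plus whatever `--dhchap-key`/`--dhchap-ctrlr-key` selection the step needs). A sketch of that plumbing, using the rpc.py path, addresses, and NQNs from this particular run (adjust for your tree):

```shell
#!/usr/bin/env bash
# Sketch of the hostrpc/bdev_connect helpers seen throughout this log.
# The rpc.py path, socket, addresses, and NQNs are the values from this CI run.
rpc_py=${rpc_py:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py}
hostnqn="nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4"

hostrpc() {
    # Talk to the host-side SPDK app on its own RPC socket,
    # separate from the target's default /var/tmp/spdk.sock.
    "$rpc_py" -s /var/tmp/host.sock "$@"
}

bdev_connect() {
    # Attach a controller over TCP; trailing args pick the DHCHAP keys,
    # e.g.: bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 "$@"
}
```

Overriding `rpc_py` with a stub is a convenient way to inspect the command line a given step would issue without a live target.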
09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.492 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:16.492 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:16.492 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.751 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.751 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:19:16.751 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: --dhchap-ctrl-secret DHHC-1:03:NTNkYWI5NzM4OGE3YmFjYmRmNTBmYzdlNWIxOTk2MGEzMWY3NzZhMjhjMmVkMmIzNmJlNjJhNTRiYTY1NWJlMQDRflw=: 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.692 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:17.951 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:18.891 request: 00:19:18.891 { 00:19:18.891 "name": "nvme0", 00:19:18.891 "trtype": "tcp", 00:19:18.891 "traddr": "10.0.0.2", 00:19:18.891 "adrfam": "ipv4", 00:19:18.891 "trsvcid": "4420", 00:19:18.891 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:18.891 "prchk_reftag": false, 00:19:18.891 "prchk_guard": false, 00:19:18.891 "hdgst": false, 00:19:18.891 "ddgst": false, 00:19:18.891 "dhchap_key": "key1", 00:19:18.891 "allow_unrecognized_csi": false, 00:19:18.891 "method": "bdev_nvme_attach_controller", 00:19:18.891 "req_id": 1 00:19:18.891 } 00:19:18.892 Got JSON-RPC error response 00:19:18.892 response: 00:19:18.892 { 00:19:18.892 "code": -5, 00:19:18.892 "message": "Input/output error" 00:19:18.892 } 00:19:18.892 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:18.892 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.892 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.892 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.892 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:18.892 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:18.892 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:20.275 nvme0n1 00:19:20.275 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:20.275 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:20.275 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.533 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.533 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.533 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.792 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:20.792 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.792 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.792 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.792 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:20.792 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:20.792 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:21.051 nvme0n1 00:19:21.051 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:21.051 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:21.051 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.309 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.309 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.309 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:21.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: '' 2s 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: ]] 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjFhNWJkYjMxNzEyZTY2N2M5YTJhNTJmZGVkNGVlNWO88mZF: 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:21.831 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:23.743 
09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: 2s 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:23.743 09:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: ]] 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODgxYzcxYWVkNjY3NjgzNTRiZTk3MzQ5NjYxZDY1YTZmZDMyZTYxMmNiYzdkZTYyukxcjw==: 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:23.743 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:25.653 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:25.653 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:19:25.653 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:25.653 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:19:25.653 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:25.653 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:19:25.653 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:19:25.653 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.911 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:25.911 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.911 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.911 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.911 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:25.911 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:25.911 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:27.296 nvme0n1 00:19:27.297 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:19:27.297 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.297 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.297 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.297 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:27.297 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:28.238 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:28.238 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:28.238 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.238 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.238 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:28.238 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.238 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.238 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.238 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:28.238 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:28.496 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:28.496 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:28.496 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:28.754 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:29.694 request: 00:19:29.694 { 00:19:29.694 "name": "nvme0", 00:19:29.694 "dhchap_key": "key1", 00:19:29.694 "dhchap_ctrlr_key": "key3", 00:19:29.694 "method": "bdev_nvme_set_keys", 00:19:29.694 "req_id": 1 00:19:29.694 } 00:19:29.694 Got JSON-RPC error response 00:19:29.694 response: 00:19:29.694 { 00:19:29.694 "code": -13, 00:19:29.694 "message": "Permission denied" 00:19:29.694 } 00:19:29.694 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:29.694 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:29.694 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:29.694 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:29.694 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:29.694 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:29.694 09:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.954 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:29.954 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:30.892 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:30.892 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:30.892 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.151 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:31.151 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:31.151 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.151 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.151 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.151 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:31.151 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:31.151 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:32.533 nvme0n1 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.533 09:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:32.533 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:33.475 request: 00:19:33.475 { 00:19:33.475 "name": "nvme0", 00:19:33.475 "dhchap_key": "key2", 00:19:33.475 "dhchap_ctrlr_key": "key0", 00:19:33.475 "method": "bdev_nvme_set_keys", 00:19:33.475 "req_id": 1 00:19:33.475 } 00:19:33.475 Got JSON-RPC error response 00:19:33.475 response: 00:19:33.475 { 00:19:33.475 "code": -13, 00:19:33.475 "message": "Permission denied" 00:19:33.475 } 00:19:33.475 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:33.475 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.475 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.475 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.475 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:33.475 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:33.475 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.733 09:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:33.733 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:34.673 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:34.673 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:34.674 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 208662 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 208662 ']' 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 208662 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 208662 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 208662' 00:19:34.933 killing process with pid 208662 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 208662 00:19:34.933 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 208662 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.504 rmmod nvme_tcp 00:19:35.504 rmmod nvme_fabrics 00:19:35.504 rmmod nvme_keyring 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 230977 ']' 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 230977 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 230977 ']' 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 230977 00:19:35.504 09:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 230977 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 230977' 00:19:35.504 killing process with pid 230977 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 230977 00:19:35.504 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 230977 00:19:35.764 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:35.764 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:35.764 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:35.764 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:35.764 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:19:35.764 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:35.765 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:19:35.765 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.765 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:35.765 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.765 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.765 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.APp /tmp/spdk.key-sha256.GyJ /tmp/spdk.key-sha384.6Pr /tmp/spdk.key-sha512.QoR /tmp/spdk.key-sha512.AOe /tmp/spdk.key-sha384.VUe /tmp/spdk.key-sha256.LzD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:38.303 00:19:38.303 real 3m34.441s 00:19:38.303 user 8m22.605s 00:19:38.303 sys 0m27.594s 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.303 ************************************ 00:19:38.303 END TEST nvmf_auth_target 00:19:38.303 ************************************ 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.303 09:40:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:38.303 ************************************ 00:19:38.303 START TEST nvmf_bdevio_no_huge 00:19:38.303 ************************************ 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:38.303 * Looking for test storage... 00:19:38.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:38.303 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.304 09:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:38.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.304 --rc genhtml_branch_coverage=1 00:19:38.304 --rc genhtml_function_coverage=1 00:19:38.304 --rc genhtml_legend=1 00:19:38.304 --rc geninfo_all_blocks=1 00:19:38.304 --rc geninfo_unexecuted_blocks=1 00:19:38.304 00:19:38.304 ' 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:38.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.304 --rc genhtml_branch_coverage=1 00:19:38.304 --rc genhtml_function_coverage=1 00:19:38.304 --rc genhtml_legend=1 00:19:38.304 --rc geninfo_all_blocks=1 00:19:38.304 --rc geninfo_unexecuted_blocks=1 00:19:38.304 00:19:38.304 ' 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:38.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.304 --rc genhtml_branch_coverage=1 00:19:38.304 --rc genhtml_function_coverage=1 00:19:38.304 --rc genhtml_legend=1 00:19:38.304 --rc geninfo_all_blocks=1 00:19:38.304 --rc geninfo_unexecuted_blocks=1 00:19:38.304 00:19:38.304 ' 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:38.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.304 --rc genhtml_branch_coverage=1 00:19:38.304 --rc 
genhtml_function_coverage=1 00:19:38.304 --rc genhtml_legend=1 00:19:38.304 --rc geninfo_all_blocks=1 00:19:38.304 --rc geninfo_unexecuted_blocks=1 00:19:38.304 00:19:38.304 ' 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:19:38.304 09:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.304 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:38.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:38.304 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:38.305 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.215 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.215 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x1592)' 00:19:40.216 Found 0000:09:00.0 (0x8086 - 0x1592) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:19:40.216 Found 0000:09:00.1 (0x8086 - 0x1592) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:40.216 Found net devices under 0000:09:00.0: cvl_0_0 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.216 
09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:40.216 Found net devices under 0000:09:00.1: cvl_0_1 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.216 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:40.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:19:40.217 00:19:40.217 --- 10.0.0.2 ping statistics --- 00:19:40.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.217 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:19:40.217 00:19:40.217 --- 10.0.0.1 ping statistics --- 00:19:40.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.217 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=235988 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 235988 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 235988 ']' 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.217 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.478 [2024-10-07 09:40:29.222036] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:19:40.479 [2024-10-07 09:40:29.222128] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:40.479 [2024-10-07 09:40:29.292797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.479 [2024-10-07 09:40:29.401471] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.479 [2024-10-07 09:40:29.401539] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.479 [2024-10-07 09:40:29.401552] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.479 [2024-10-07 09:40:29.401563] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.479 [2024-10-07 09:40:29.401572] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:40.479 [2024-10-07 09:40:29.402448] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:19:40.479 [2024-10-07 09:40:29.402512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:19:40.479 [2024-10-07 09:40:29.402576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:19:40.479 [2024-10-07 09:40:29.402580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.741 [2024-10-07 09:40:29.553016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:40.741 09:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.741 Malloc0 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.741 [2024-10-07 09:40:29.590678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.741 09:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:40.741 { 00:19:40.741 "params": { 00:19:40.741 "name": "Nvme$subsystem", 00:19:40.741 "trtype": "$TEST_TRANSPORT", 00:19:40.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.741 "adrfam": "ipv4", 00:19:40.741 "trsvcid": "$NVMF_PORT", 00:19:40.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.741 "hdgst": ${hdgst:-false}, 00:19:40.741 "ddgst": ${ddgst:-false} 00:19:40.741 }, 00:19:40.741 "method": "bdev_nvme_attach_controller" 00:19:40.741 } 00:19:40.741 EOF 00:19:40.741 )") 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:19:40.741 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:40.741 "params": { 00:19:40.741 "name": "Nvme1", 00:19:40.741 "trtype": "tcp", 00:19:40.741 "traddr": "10.0.0.2", 00:19:40.741 "adrfam": "ipv4", 00:19:40.741 "trsvcid": "4420", 00:19:40.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.741 "hdgst": false, 00:19:40.741 "ddgst": false 00:19:40.742 }, 00:19:40.742 "method": "bdev_nvme_attach_controller" 00:19:40.742 }' 00:19:40.742 [2024-10-07 09:40:29.644366] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:19:40.742 [2024-10-07 09:40:29.644446] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid236101 ] 00:19:40.742 [2024-10-07 09:40:29.706195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:41.001 [2024-10-07 09:40:29.822362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.001 [2024-10-07 09:40:29.822412] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.001 [2024-10-07 09:40:29.822416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.259 I/O targets: 00:19:41.259 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:41.259 00:19:41.259 00:19:41.259 CUnit - A unit testing framework for C - Version 2.1-3 00:19:41.259 http://cunit.sourceforge.net/ 00:19:41.259 00:19:41.259 00:19:41.259 Suite: bdevio tests on: Nvme1n1 00:19:41.259 Test: blockdev write read block ...passed 00:19:41.259 Test: blockdev write zeroes read block ...passed 00:19:41.259 Test: blockdev write zeroes read no split ...passed 00:19:41.259 Test: blockdev write zeroes 
read split ...passed 00:19:41.259 Test: blockdev write zeroes read split partial ...passed 00:19:41.259 Test: blockdev reset ...[2024-10-07 09:40:30.209053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.259 [2024-10-07 09:40:30.209166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cadf60 (9): Bad file descriptor 00:19:41.259 [2024-10-07 09:40:30.225574] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:41.259 passed 00:19:41.520 Test: blockdev write read 8 blocks ...passed 00:19:41.520 Test: blockdev write read size > 128k ...passed 00:19:41.520 Test: blockdev write read invalid size ...passed 00:19:41.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:41.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:41.520 Test: blockdev write read max offset ...passed 00:19:41.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:41.520 Test: blockdev writev readv 8 blocks ...passed 00:19:41.520 Test: blockdev writev readv 30 x 1block ...passed 00:19:41.520 Test: blockdev writev readv block ...passed 00:19:41.783 Test: blockdev writev readv size > 128k ...passed 00:19:41.783 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:41.783 Test: blockdev comparev and writev ...[2024-10-07 09:40:30.523827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.783 [2024-10-07 09:40:30.523863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.523888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.783 [2024-10-07 09:40:30.523905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.524255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.783 [2024-10-07 09:40:30.524280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.524302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.783 [2024-10-07 09:40:30.524318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.524639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.783 [2024-10-07 09:40:30.524663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.524693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.783 [2024-10-07 09:40:30.524709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.525052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.783 [2024-10-07 09:40:30.525076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.525097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:19:41.783 [2024-10-07 09:40:30.525114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:41.783 passed 00:19:41.783 Test: blockdev nvme passthru rw ...passed 00:19:41.783 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:40:30.607926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.783 [2024-10-07 09:40:30.607955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.608104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.783 [2024-10-07 09:40:30.608128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.608271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.783 [2024-10-07 09:40:30.608300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:41.783 [2024-10-07 09:40:30.608445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.783 [2024-10-07 09:40:30.608469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:41.783 passed 00:19:41.783 Test: blockdev nvme admin passthru ...passed 00:19:41.783 Test: blockdev copy ...passed 00:19:41.783 00:19:41.783 Run Summary: Type Total Ran Passed Failed Inactive 00:19:41.783 suites 1 1 n/a 0 0 00:19:41.783 tests 23 23 23 0 0 00:19:41.783 asserts 152 152 152 0 n/a 00:19:41.783 00:19:41.783 Elapsed time = 1.233 seconds 00:19:42.355 09:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.355 rmmod nvme_tcp 00:19:42.355 rmmod nvme_fabrics 00:19:42.355 rmmod nvme_keyring 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 235988 ']' 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@516 -- # killprocess 235988 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 235988 ']' 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 235988 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 235988 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 235988' 00:19:42.355 killing process with pid 235988 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 235988 00:19:42.355 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 235988 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.615 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:45.156 00:19:45.156 real 0m6.777s 00:19:45.156 user 0m11.383s 00:19:45.156 sys 0m2.651s 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:45.156 ************************************ 00:19:45.156 END TEST nvmf_bdevio_no_huge 00:19:45.156 ************************************ 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.156 ************************************ 00:19:45.156 START TEST nvmf_tls 
00:19:45.156 ************************************ 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:45.156 * Looking for test storage... 00:19:45.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.156 09:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.156 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:19:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.157 --rc genhtml_branch_coverage=1 00:19:45.157 --rc genhtml_function_coverage=1 00:19:45.157 --rc genhtml_legend=1 00:19:45.157 --rc geninfo_all_blocks=1 00:19:45.157 --rc geninfo_unexecuted_blocks=1 00:19:45.157 00:19:45.157 ' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.157 --rc genhtml_branch_coverage=1 00:19:45.157 --rc genhtml_function_coverage=1 00:19:45.157 --rc genhtml_legend=1 00:19:45.157 --rc geninfo_all_blocks=1 00:19:45.157 --rc geninfo_unexecuted_blocks=1 00:19:45.157 00:19:45.157 ' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.157 --rc genhtml_branch_coverage=1 00:19:45.157 --rc genhtml_function_coverage=1 00:19:45.157 --rc genhtml_legend=1 00:19:45.157 --rc geninfo_all_blocks=1 00:19:45.157 --rc geninfo_unexecuted_blocks=1 00:19:45.157 00:19:45.157 ' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.157 --rc genhtml_branch_coverage=1 00:19:45.157 --rc genhtml_function_coverage=1 00:19:45.157 --rc genhtml_legend=1 00:19:45.157 --rc geninfo_all_blocks=1 00:19:45.157 --rc geninfo_unexecuted_blocks=1 00:19:45.157 00:19:45.157 ' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:45.157 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.064 09:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:19:47.064 Found 0000:09:00.0 (0x8086 - 0x1592) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:19:47.064 Found 0000:09:00.1 (0x8086 - 0x1592) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:47.064 09:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:47.064 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:47.065 Found net devices under 0000:09:00.0: cvl_0_0 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:47.065 Found net devices under 0000:09:00.1: cvl_0_1 00:19:47.065 09:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:47.065 
09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:47.065 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:47.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:47.065 00:19:47.065 --- 10.0.0.2 ping statistics --- 00:19:47.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.065 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:19:47.065 00:19:47.065 --- 10.0.0.1 ping statistics --- 00:19:47.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.065 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:47.065 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=238112 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 238112 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 238112 ']' 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.324 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.324 [2024-10-07 09:40:36.133319] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:19:47.324 [2024-10-07 09:40:36.133405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.324 [2024-10-07 09:40:36.198043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.324 [2024-10-07 09:40:36.307004] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.324 [2024-10-07 09:40:36.307072] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:47.324 [2024-10-07 09:40:36.307085] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.324 [2024-10-07 09:40:36.307096] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.324 [2024-10-07 09:40:36.307105] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.325 [2024-10-07 09:40:36.307642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.584 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.584 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.584 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:47.584 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:47.584 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.584 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.584 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:47.584 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:47.843 true 00:19:47.843 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:47.843 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:48.102 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:48.102 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:48.102 
09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:48.362 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:48.362 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:48.623 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:48.623 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:48.623 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:48.882 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:48.882 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:49.140 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:49.140 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:49.140 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:49.140 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:49.399 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:49.399 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:49.399 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
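Editor's note: shortly after this point the test derives its TLS pre-shared keys via `format_interchange_psk` (target/tls.sh@119-120), which pipes the configured key through an inline `python -` helper in nvmf/common.sh. The sketch below is a best-effort reconstruction of what that helper appears to compute, based on the NVMe TLS PSK interchange format: base64 of the key bytes plus a little-endian CRC32 trailer, with the `01` field selecting the SHA-256 digest. The CRC32 trailer and digest byte are assumptions, not lifted verbatim from the log.

```python
# Hedged sketch of SPDK's format_interchange_psk / format_key behavior.
# Assumption: the interchange string is
#   NVMeTLSkey-1:<digest>:<base64(key_bytes + crc32_le(key_bytes))>:
import base64
import struct
import zlib


def format_interchange_psk(key: str, digest: int = 1) -> str:
    """Return an NVMeTLSkey-1 interchange string for a configured key."""
    raw = key.encode("ascii")
    # Append the CRC32 of the key bytes as a 4-byte little-endian trailer.
    payload = raw + struct.pack("<I", zlib.crc32(raw))
    return f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(payload).decode()}:"


# Same input the test uses at target/tls.sh@119.
psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

For the 32-character key above this yields a 48-character base64 body (32 key bytes + 4 CRC bytes), matching the shape of the `NVMeTLSkey-1:01:MDAxMTIy...` keys the log writes to the `/tmp/tmp.*` files and later registers with `keyring_file_add_key`.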
00:19:49.965 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:49.965 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:50.226 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:50.226 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:50.226 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:50.487 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.487 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:50.747 09:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.rYgSbdgz6W 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.SbXL90oPP0 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.rYgSbdgz6W 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.SbXL90oPP0 00:19:50.747 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:51.006 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:51.266 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.rYgSbdgz6W 00:19:51.266 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rYgSbdgz6W 00:19:51.266 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.526 [2024-10-07 09:40:40.499977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.526 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.786 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.046 [2024-10-07 09:40:41.021387] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.046 [2024-10-07 09:40:41.021654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.046 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.612 malloc0 00:19:52.612 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.612 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rYgSbdgz6W 00:19:52.871 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.439 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rYgSbdgz6W 00:20:03.430 Initializing NVMe Controllers 00:20:03.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:03.430 Initialization complete. Launching workers. 
00:20:03.430 ======================================================== 00:20:03.430 Latency(us) 00:20:03.430 Device Information : IOPS MiB/s Average min max 00:20:03.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8581.77 33.52 7459.78 1034.21 8949.54 00:20:03.430 ======================================================== 00:20:03.430 Total : 8581.77 33.52 7459.78 1034.21 8949.54 00:20:03.430 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rYgSbdgz6W 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rYgSbdgz6W 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=240043 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 240043 /var/tmp/bdevperf.sock 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 240043 ']' 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.430 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.430 [2024-10-07 09:40:52.303189] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:03.430 [2024-10-07 09:40:52.303279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240043 ] 00:20:03.430 [2024-10-07 09:40:52.359309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.688 [2024-10-07 09:40:52.465459] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.688 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:03.688 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:03.688 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rYgSbdgz6W 00:20:03.947 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:20:04.206 [2024-10-07 09:40:53.109277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.206 TLSTESTn1 00:20:04.206 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:04.466 Running I/O for 10 seconds... 00:20:14.540 3414.00 IOPS, 13.34 MiB/s 3412.00 IOPS, 13.33 MiB/s 3457.00 IOPS, 13.50 MiB/s 3457.00 IOPS, 13.50 MiB/s 3448.20 IOPS, 13.47 MiB/s 3447.33 IOPS, 13.47 MiB/s 3449.14 IOPS, 13.47 MiB/s 3455.00 IOPS, 13.50 MiB/s 3460.00 IOPS, 13.52 MiB/s 3429.10 IOPS, 13.39 MiB/s 00:20:14.540 Latency(us) 00:20:14.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.540 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:14.540 Verification LBA range: start 0x0 length 0x2000 00:20:14.540 TLSTESTn1 : 10.03 3432.89 13.41 0.00 0.00 37216.38 6189.51 46797.56 00:20:14.540 =================================================================================================================== 00:20:14.540 Total : 3432.89 13.41 0.00 0.00 37216.38 6189.51 46797.56 00:20:14.540 { 00:20:14.540 "results": [ 00:20:14.540 { 00:20:14.540 "job": "TLSTESTn1", 00:20:14.540 "core_mask": "0x4", 00:20:14.540 "workload": "verify", 00:20:14.540 "status": "finished", 00:20:14.540 "verify_range": { 00:20:14.540 "start": 0, 00:20:14.540 "length": 8192 00:20:14.540 }, 00:20:14.540 "queue_depth": 128, 00:20:14.540 "io_size": 4096, 00:20:14.540 "runtime": 10.026243, 00:20:14.540 "iops": 3432.891063980795, 00:20:14.540 "mibps": 13.40973071867498, 00:20:14.540 "io_failed": 0, 00:20:14.540 "io_timeout": 0, 00:20:14.540 "avg_latency_us": 37216.38047822424, 00:20:14.540 "min_latency_us": 6189.511111111111, 00:20:14.540 "max_latency_us": 46797.55851851852 00:20:14.540 } 00:20:14.540 ], 00:20:14.540 "core_count": 1 
00:20:14.540 } 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 240043 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 240043 ']' 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 240043 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 240043 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 240043' 00:20:14.540 killing process with pid 240043 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 240043 00:20:14.540 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.540 00:20:14.540 Latency(us) 00:20:14.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.540 =================================================================================================================== 00:20:14.540 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.540 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 240043 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.SbXL90oPP0 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbXL90oPP0 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbXL90oPP0 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SbXL90oPP0 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=241321 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 241321 /var/tmp/bdevperf.sock 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 241321 ']' 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.799 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.799 [2024-10-07 09:41:03.725120] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:20:14.799 [2024-10-07 09:41:03.725208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241321 ] 00:20:14.799 [2024-10-07 09:41:03.782597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.056 [2024-10-07 09:41:03.887254] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.056 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.056 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:15.056 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SbXL90oPP0 00:20:15.621 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.621 [2024-10-07 09:41:04.585580] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.621 [2024-10-07 09:41:04.591122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:15.621 [2024-10-07 09:41:04.591676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb16b0 (107): Transport endpoint is not connected 00:20:15.621 [2024-10-07 09:41:04.592645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb16b0 (9): Bad file descriptor 00:20:15.621 [2024-10-07 
09:41:04.593644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:15.621 [2024-10-07 09:41:04.593696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:15.621 [2024-10-07 09:41:04.593711] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:15.621 [2024-10-07 09:41:04.593728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:15.621 request: 00:20:15.621 { 00:20:15.621 "name": "TLSTEST", 00:20:15.621 "trtype": "tcp", 00:20:15.621 "traddr": "10.0.0.2", 00:20:15.621 "adrfam": "ipv4", 00:20:15.621 "trsvcid": "4420", 00:20:15.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.621 "prchk_reftag": false, 00:20:15.621 "prchk_guard": false, 00:20:15.621 "hdgst": false, 00:20:15.621 "ddgst": false, 00:20:15.621 "psk": "key0", 00:20:15.621 "allow_unrecognized_csi": false, 00:20:15.621 "method": "bdev_nvme_attach_controller", 00:20:15.621 "req_id": 1 00:20:15.621 } 00:20:15.621 Got JSON-RPC error response 00:20:15.621 response: 00:20:15.621 { 00:20:15.621 "code": -5, 00:20:15.621 "message": "Input/output error" 00:20:15.621 } 00:20:15.621 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 241321 00:20:15.621 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 241321 ']' 00:20:15.621 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 241321 00:20:15.621 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.621 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.879 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241321 00:20:15.879 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:15.879 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:15.879 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241321' 00:20:15.879 killing process with pid 241321 00:20:15.879 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 241321 00:20:15.879 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.879 00:20:15.879 Latency(us) 00:20:15.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.879 =================================================================================================================== 00:20:15.879 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.879 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 241321 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rYgSbdgz6W 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rYgSbdgz6W 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rYgSbdgz6W 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rYgSbdgz6W 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=241462 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 241462 /var/tmp/bdevperf.sock 00:20:16.138 09:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 241462 ']' 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.138 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.138 [2024-10-07 09:41:04.935409] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:16.138 [2024-10-07 09:41:04.935494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241462 ] 00:20:16.138 [2024-10-07 09:41:04.991161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.138 [2024-10-07 09:41:05.103594] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.396 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.396 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:16.396 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rYgSbdgz6W 00:20:16.654 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:16.913 [2024-10-07 09:41:05.731526] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.913 [2024-10-07 09:41:05.743361] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:16.913 [2024-10-07 09:41:05.743391] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:16.913 [2024-10-07 09:41:05.743441] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:16.913 [2024-10-07 09:41:05.743698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcce6b0 (107): Transport endpoint is not connected 00:20:16.913 [2024-10-07 09:41:05.744688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcce6b0 (9): Bad file descriptor 00:20:16.913 [2024-10-07 09:41:05.745687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.913 [2024-10-07 09:41:05.745709] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:16.913 [2024-10-07 09:41:05.745722] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:16.913 [2024-10-07 09:41:05.745742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:16.913 request: 00:20:16.913 { 00:20:16.913 "name": "TLSTEST", 00:20:16.913 "trtype": "tcp", 00:20:16.913 "traddr": "10.0.0.2", 00:20:16.913 "adrfam": "ipv4", 00:20:16.913 "trsvcid": "4420", 00:20:16.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.913 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:16.913 "prchk_reftag": false, 00:20:16.913 "prchk_guard": false, 00:20:16.913 "hdgst": false, 00:20:16.913 "ddgst": false, 00:20:16.913 "psk": "key0", 00:20:16.913 "allow_unrecognized_csi": false, 00:20:16.913 "method": "bdev_nvme_attach_controller", 00:20:16.913 "req_id": 1 00:20:16.913 } 00:20:16.913 Got JSON-RPC error response 00:20:16.913 response: 00:20:16.913 { 00:20:16.913 "code": -5, 00:20:16.913 "message": "Input/output error" 00:20:16.913 } 00:20:16.913 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 241462 00:20:16.913 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 241462 ']' 00:20:16.913 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 241462 00:20:16.913 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:16.914 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.914 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241462 00:20:16.914 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:16.914 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:16.914 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241462' 00:20:16.914 killing process with pid 241462 00:20:16.914 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 241462 00:20:16.914 Received shutdown 
signal, test time was about 10.000000 seconds 00:20:16.914 00:20:16.914 Latency(us) 00:20:16.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.914 =================================================================================================================== 00:20:16.914 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:16.914 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 241462 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rYgSbdgz6W 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rYgSbdgz6W 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.172 09:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rYgSbdgz6W 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rYgSbdgz6W 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=241599 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 241599 /var/tmp/bdevperf.sock 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 241599 ']' 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.172 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.172 [2024-10-07 09:41:06.114041] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:17.172 [2024-10-07 09:41:06.114138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241599 ] 00:20:17.429 [2024-10-07 09:41:06.170149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.429 [2024-10-07 09:41:06.278696] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.429 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.430 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:17.430 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rYgSbdgz6W 00:20:17.996 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:17.996 [2024-10-07 09:41:06.964637] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.996 [2024-10-07 09:41:06.976536] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:17.996 [2024-10-07 09:41:06.976565] posix.c: 
574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:17.996 [2024-10-07 09:41:06.976600] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:17.996 [2024-10-07 09:41:06.976655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae86b0 (107): Transport endpoint is not connected 00:20:17.996 [2024-10-07 09:41:06.977645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae86b0 (9): Bad file descriptor 00:20:17.996 [2024-10-07 09:41:06.978657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:17.996 [2024-10-07 09:41:06.978691] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:17.996 [2024-10-07 09:41:06.978706] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:17.996 [2024-10-07 09:41:06.978726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:17.996 request: 00:20:17.996 { 00:20:17.996 "name": "TLSTEST", 00:20:17.996 "trtype": "tcp", 00:20:17.996 "traddr": "10.0.0.2", 00:20:17.996 "adrfam": "ipv4", 00:20:17.996 "trsvcid": "4420", 00:20:17.996 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:17.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.996 "prchk_reftag": false, 00:20:17.996 "prchk_guard": false, 00:20:17.996 "hdgst": false, 00:20:17.996 "ddgst": false, 00:20:17.996 "psk": "key0", 00:20:17.996 "allow_unrecognized_csi": false, 00:20:17.996 "method": "bdev_nvme_attach_controller", 00:20:17.996 "req_id": 1 00:20:17.996 } 00:20:17.996 Got JSON-RPC error response 00:20:17.996 response: 00:20:17.996 { 00:20:17.996 "code": -5, 00:20:17.996 "message": "Input/output error" 00:20:17.996 } 00:20:18.256 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 241599 00:20:18.256 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 241599 ']' 00:20:18.256 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 241599 00:20:18.256 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:18.256 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.256 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241599 00:20:18.256 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:18.256 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:18.256 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241599' 00:20:18.256 killing process with pid 241599 00:20:18.256 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 241599 00:20:18.256 Received shutdown 
signal, test time was about 10.000000 seconds 00:20:18.256 00:20:18.256 Latency(us) 00:20:18.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.256 =================================================================================================================== 00:20:18.256 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.256 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 241599 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=241731 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 241731 /var/tmp/bdevperf.sock 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 241731 ']' 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.515 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.515 [2024-10-07 09:41:07.351203] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:18.515 [2024-10-07 09:41:07.351292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241731 ] 00:20:18.515 [2024-10-07 09:41:07.407403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.515 [2024-10-07 09:41:07.510406] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.773 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:18.773 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:18.773 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:19.031 [2024-10-07 09:41:07.868342] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:19.031 [2024-10-07 09:41:07.868382] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:19.031 request: 00:20:19.031 { 00:20:19.031 "name": "key0", 00:20:19.031 "path": "", 00:20:19.031 "method": "keyring_file_add_key", 00:20:19.031 "req_id": 1 00:20:19.031 } 00:20:19.031 Got JSON-RPC error response 00:20:19.031 response: 00:20:19.031 { 00:20:19.031 "code": -1, 00:20:19.031 "message": "Operation not permitted" 00:20:19.031 } 00:20:19.031 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:19.289 [2024-10-07 09:41:08.149252] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.290 [2024-10-07 09:41:08.149311] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:19.290 request: 00:20:19.290 { 00:20:19.290 "name": "TLSTEST", 00:20:19.290 "trtype": "tcp", 00:20:19.290 "traddr": "10.0.0.2", 00:20:19.290 "adrfam": "ipv4", 00:20:19.290 "trsvcid": "4420", 00:20:19.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.290 "prchk_reftag": false, 00:20:19.290 "prchk_guard": false, 00:20:19.290 "hdgst": false, 00:20:19.290 "ddgst": false, 00:20:19.290 "psk": "key0", 00:20:19.290 "allow_unrecognized_csi": false, 00:20:19.290 "method": "bdev_nvme_attach_controller", 00:20:19.290 "req_id": 1 00:20:19.290 } 00:20:19.290 Got JSON-RPC error response 00:20:19.290 response: 00:20:19.290 { 00:20:19.290 "code": -126, 00:20:19.290 "message": "Required key not available" 00:20:19.290 } 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 241731 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 241731 ']' 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 241731 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241731 00:20:19.290 09:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241731' 00:20:19.290 killing process with pid 241731 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 241731 00:20:19.290 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.290 00:20:19.290 Latency(us) 00:20:19.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.290 =================================================================================================================== 00:20:19.290 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:19.290 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 241731 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 238112 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 238112 ']' 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 238112 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.548 09:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 238112 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 238112' 00:20:19.548 killing process with pid 238112 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 238112 00:20:19.548 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 238112 00:20:19.805 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:19.805 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:19.805 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:20:19.805 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:20:19.805 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:19.805 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:20:19.805 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 
00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.JkXXP8sNf2 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.JkXXP8sNf2 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=241885 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 241885 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 241885 ']' 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.063 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:20.064 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.064 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.064 [2024-10-07 09:41:08.881961] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:20.064 [2024-10-07 09:41:08.882054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.064 [2024-10-07 09:41:08.942305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.064 [2024-10-07 09:41:09.049738] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.064 [2024-10-07 09:41:09.049790] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.064 [2024-10-07 09:41:09.049812] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.064 [2024-10-07 09:41:09.049823] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.064 [2024-10-07 09:41:09.049833] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.064 [2024-10-07 09:41:09.050338] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.JkXXP8sNf2 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JkXXP8sNf2 00:20:20.321 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:20.579 [2024-10-07 09:41:09.430910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.579 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:20.837 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:21.094 [2024-10-07 09:41:09.956294] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.094 [2024-10-07 09:41:09.956519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:21.094 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:21.352 malloc0 00:20:21.352 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:21.609 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:20:21.866 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JkXXP8sNf2 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JkXXP8sNf2 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=242161 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.124 09:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 242161 /var/tmp/bdevperf.sock 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 242161 ']' 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.124 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.383 [2024-10-07 09:41:11.137056] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:20:22.383 [2024-10-07 09:41:11.137146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242161 ] 00:20:22.383 [2024-10-07 09:41:11.193182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.383 [2024-10-07 09:41:11.302703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.642 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.642 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:22.642 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:20:22.899 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.157 [2024-10-07 09:41:11.927303] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.157 TLSTESTn1 00:20:23.157 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:23.157 Running I/O for 10 seconds... 
00:20:33.412 3198.00 IOPS, 12.49 MiB/s 3213.00 IOPS, 12.55 MiB/s 3237.00 IOPS, 12.64 MiB/s 3238.00 IOPS, 12.65 MiB/s 3236.40 IOPS, 12.64 MiB/s 3249.50 IOPS, 12.69 MiB/s 3260.86 IOPS, 12.74 MiB/s 3261.12 IOPS, 12.74 MiB/s 3255.00 IOPS, 12.71 MiB/s 3261.30 IOPS, 12.74 MiB/s 00:20:33.412 Latency(us) 00:20:33.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.412 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:33.412 Verification LBA range: start 0x0 length 0x2000 00:20:33.412 TLSTESTn1 : 10.02 3268.10 12.77 0.00 0.00 39107.28 6602.15 33593.27 00:20:33.412 =================================================================================================================== 00:20:33.412 Total : 3268.10 12.77 0.00 0.00 39107.28 6602.15 33593.27 00:20:33.412 { 00:20:33.412 "results": [ 00:20:33.412 { 00:20:33.412 "job": "TLSTESTn1", 00:20:33.412 "core_mask": "0x4", 00:20:33.412 "workload": "verify", 00:20:33.412 "status": "finished", 00:20:33.412 "verify_range": { 00:20:33.412 "start": 0, 00:20:33.412 "length": 8192 00:20:33.412 }, 00:20:33.412 "queue_depth": 128, 00:20:33.412 "io_size": 4096, 00:20:33.412 "runtime": 10.018354, 00:20:33.412 "iops": 3268.1017260919307, 00:20:33.412 "mibps": 12.766022367546604, 00:20:33.412 "io_failed": 0, 00:20:33.412 "io_timeout": 0, 00:20:33.412 "avg_latency_us": 39107.27525370274, 00:20:33.412 "min_latency_us": 6602.145185185185, 00:20:33.412 "max_latency_us": 33593.26814814815 00:20:33.412 } 00:20:33.412 ], 00:20:33.412 "core_count": 1 00:20:33.412 } 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 242161 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 242161 ']' 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 242161 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 242161 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 242161' 00:20:33.412 killing process with pid 242161 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 242161 00:20:33.412 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.412 00:20:33.412 Latency(us) 00:20:33.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.412 =================================================================================================================== 00:20:33.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.412 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 242161 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.JkXXP8sNf2 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JkXXP8sNf2 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.JkXXP8sNf2 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JkXXP8sNf2 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JkXXP8sNf2 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=243422 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 243422 /var/tmp/bdevperf.sock 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 243422 ']' 00:20:33.670 09:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.670 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.670 [2024-10-07 09:41:22.544522] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:33.670 [2024-10-07 09:41:22.544613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243422 ] 00:20:33.670 [2024-10-07 09:41:22.600216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.928 [2024-10-07 09:41:22.711313] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.928 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.928 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:33.928 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:20:34.186 [2024-10-07 09:41:23.067078] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JkXXP8sNf2': 0100666 00:20:34.186 [2024-10-07 09:41:23.067120] 
keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:34.186 request: 00:20:34.186 { 00:20:34.186 "name": "key0", 00:20:34.186 "path": "/tmp/tmp.JkXXP8sNf2", 00:20:34.186 "method": "keyring_file_add_key", 00:20:34.186 "req_id": 1 00:20:34.186 } 00:20:34.186 Got JSON-RPC error response 00:20:34.186 response: 00:20:34.186 { 00:20:34.186 "code": -1, 00:20:34.186 "message": "Operation not permitted" 00:20:34.186 } 00:20:34.186 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:34.443 [2024-10-07 09:41:23.392038] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.443 [2024-10-07 09:41:23.392082] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:34.443 request: 00:20:34.443 { 00:20:34.443 "name": "TLSTEST", 00:20:34.443 "trtype": "tcp", 00:20:34.443 "traddr": "10.0.0.2", 00:20:34.443 "adrfam": "ipv4", 00:20:34.443 "trsvcid": "4420", 00:20:34.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.443 "prchk_reftag": false, 00:20:34.443 "prchk_guard": false, 00:20:34.443 "hdgst": false, 00:20:34.443 "ddgst": false, 00:20:34.443 "psk": "key0", 00:20:34.443 "allow_unrecognized_csi": false, 00:20:34.443 "method": "bdev_nvme_attach_controller", 00:20:34.443 "req_id": 1 00:20:34.443 } 00:20:34.443 Got JSON-RPC error response 00:20:34.443 response: 00:20:34.443 { 00:20:34.443 "code": -126, 00:20:34.443 "message": "Required key not available" 00:20:34.443 } 00:20:34.443 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 243422 00:20:34.443 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 
-- # '[' -z 243422 ']' 00:20:34.443 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 243422 00:20:34.443 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:34.443 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:34.443 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 243422 00:20:34.702 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:34.702 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:34.702 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 243422' 00:20:34.702 killing process with pid 243422 00:20:34.702 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 243422 00:20:34.702 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.702 00:20:34.702 Latency(us) 00:20:34.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.702 =================================================================================================================== 00:20:34.702 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.702 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 243422 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 241885 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 241885 ']' 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 241885 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241885 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241885' 00:20:34.960 killing process with pid 241885 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 241885 00:20:34.960 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 241885 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=243679 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 243679 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 243679 ']' 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:35.218 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.218 [2024-10-07 09:41:24.082360] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:35.218 [2024-10-07 09:41:24.082441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.218 [2024-10-07 09:41:24.141513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.475 [2024-10-07 09:41:24.247135] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.475 [2024-10-07 09:41:24.247195] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:35.475 [2024-10-07 09:41:24.247208] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.475 [2024-10-07 09:41:24.247218] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.475 [2024-10-07 09:41:24.247227] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.475 [2024-10-07 09:41:24.247752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.JkXXP8sNf2 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JkXXP8sNf2 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:35.475 09:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.JkXXP8sNf2 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JkXXP8sNf2 00:20:35.475 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:35.733 [2024-10-07 09:41:24.634371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.733 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:35.991 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:36.249 [2024-10-07 09:41:25.167827] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.249 [2024-10-07 09:41:25.168110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.249 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:36.508 malloc0 00:20:36.508 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.765 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:20:37.330 [2024-10-07 09:41:26.045692] 
keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JkXXP8sNf2': 0100666 00:20:37.330 [2024-10-07 09:41:26.045756] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:37.330 request: 00:20:37.330 { 00:20:37.330 "name": "key0", 00:20:37.330 "path": "/tmp/tmp.JkXXP8sNf2", 00:20:37.330 "method": "keyring_file_add_key", 00:20:37.330 "req_id": 1 00:20:37.330 } 00:20:37.330 Got JSON-RPC error response 00:20:37.330 response: 00:20:37.330 { 00:20:37.330 "code": -1, 00:20:37.330 "message": "Operation not permitted" 00:20:37.330 } 00:20:37.330 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:37.330 [2024-10-07 09:41:26.326442] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:37.330 [2024-10-07 09:41:26.326510] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:37.588 request: 00:20:37.588 { 00:20:37.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.588 "host": "nqn.2016-06.io.spdk:host1", 00:20:37.588 "psk": "key0", 00:20:37.588 "method": "nvmf_subsystem_add_host", 00:20:37.588 "req_id": 1 00:20:37.588 } 00:20:37.588 Got JSON-RPC error response 00:20:37.588 response: 00:20:37.588 { 00:20:37.588 "code": -32603, 00:20:37.588 "message": "Internal error" 00:20:37.588 } 00:20:37.588 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:37.588 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.588 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.588 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.588 09:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 243679 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 243679 ']' 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 243679 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 243679 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 243679' 00:20:37.589 killing process with pid 243679 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 243679 00:20:37.589 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 243679 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.JkXXP8sNf2 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=243966 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 243966 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 243966 ']' 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.848 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.849 [2024-10-07 09:41:26.728394] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:37.849 [2024-10-07 09:41:26.728476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.849 [2024-10-07 09:41:26.799248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.106 [2024-10-07 09:41:26.909300] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.106 [2024-10-07 09:41:26.909372] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
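The keyring_file_add_key failure earlier in this run ("Invalid permissions for key file '/tmp/tmp.JkXXP8sNf2': 0100666") is what the chmod 0600 at target/tls.sh@182 addresses: SPDK's keyring_file module rejects PSK files that are readable by group or others. A minimal sketch of the permission states involved (the key contents below are a placeholder, not valid TLS PSK material, and the file is a throwaway temp file, not the test's own key):

```shell
# mktemp creates its file mode 0600, the mode keyring_file accepts.
key=$(mktemp)
echo "placeholder-psk-material" > "$key"   # placeholder, not a real PSK
stat -c '%a' "$key"                        # prints 600 -- acceptable
chmod 0666 "$key"
stat -c '%a' "$key"                        # prints 666 -- the mode the log rejected
chmod 0600 "$key"                          # the fix applied at tls.sh@182
rm -f "$key"
```

With the mode tightened back to 0600, the second keyring_file_add_key in this trace succeeds where the first one returned "Operation not permitted".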
00:20:38.106 [2024-10-07 09:41:26.909385] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.106 [2024-10-07 09:41:26.909396] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.106 [2024-10-07 09:41:26.909405] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.106 [2024-10-07 09:41:26.909964] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.106 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.106 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:38.106 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:38.106 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.106 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.106 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.107 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.JkXXP8sNf2 00:20:38.107 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JkXXP8sNf2 00:20:38.107 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.365 [2024-10-07 09:41:27.345620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.662 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:38.663 09:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:38.920 [2024-10-07 09:41:27.875000] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.920 [2024-10-07 09:41:27.875240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.920 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.178 malloc0 00:20:39.178 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:39.743 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:20:39.743 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=244243 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 244243 /var/tmp/bdevperf.sock 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 
244243 ']' 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:40.000 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.260 [2024-10-07 09:41:29.014482] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:40.260 [2024-10-07 09:41:29.014569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244243 ] 00:20:40.260 [2024-10-07 09:41:29.070260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.260 [2024-10-07 09:41:29.179302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.518 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:40.518 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:40.518 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:20:40.777 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:41.035 [2024-10-07 09:41:29.834123] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.035 TLSTESTn1 00:20:41.035 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:41.603 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:41.603 "subsystems": [ 00:20:41.603 { 00:20:41.603 "subsystem": "keyring", 00:20:41.603 "config": [ 00:20:41.603 { 00:20:41.603 "method": "keyring_file_add_key", 00:20:41.603 "params": { 00:20:41.603 "name": "key0", 00:20:41.603 "path": "/tmp/tmp.JkXXP8sNf2" 00:20:41.603 } 00:20:41.603 } 00:20:41.603 ] 00:20:41.603 }, 00:20:41.603 { 00:20:41.603 "subsystem": "iobuf", 00:20:41.603 "config": [ 00:20:41.603 { 00:20:41.603 "method": "iobuf_set_options", 00:20:41.603 "params": { 00:20:41.603 "small_pool_count": 8192, 00:20:41.603 "large_pool_count": 1024, 00:20:41.603 "small_bufsize": 8192, 00:20:41.603 "large_bufsize": 135168 00:20:41.603 } 00:20:41.603 } 00:20:41.603 ] 00:20:41.603 }, 00:20:41.603 { 00:20:41.603 "subsystem": "sock", 00:20:41.603 "config": [ 00:20:41.603 { 00:20:41.603 "method": "sock_set_default_impl", 00:20:41.603 "params": { 00:20:41.603 "impl_name": "posix" 00:20:41.603 } 00:20:41.603 }, 00:20:41.603 { 00:20:41.603 "method": "sock_impl_set_options", 00:20:41.603 "params": { 00:20:41.603 "impl_name": "ssl", 00:20:41.604 "recv_buf_size": 4096, 00:20:41.604 "send_buf_size": 4096, 00:20:41.604 "enable_recv_pipe": true, 00:20:41.604 "enable_quickack": false, 00:20:41.604 "enable_placement_id": 0, 00:20:41.604 "enable_zerocopy_send_server": true, 00:20:41.604 "enable_zerocopy_send_client": false, 00:20:41.604 "zerocopy_threshold": 0, 00:20:41.604 "tls_version": 0, 
00:20:41.604 "enable_ktls": false 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "sock_impl_set_options", 00:20:41.604 "params": { 00:20:41.604 "impl_name": "posix", 00:20:41.604 "recv_buf_size": 2097152, 00:20:41.604 "send_buf_size": 2097152, 00:20:41.604 "enable_recv_pipe": true, 00:20:41.604 "enable_quickack": false, 00:20:41.604 "enable_placement_id": 0, 00:20:41.604 "enable_zerocopy_send_server": true, 00:20:41.604 "enable_zerocopy_send_client": false, 00:20:41.604 "zerocopy_threshold": 0, 00:20:41.604 "tls_version": 0, 00:20:41.604 "enable_ktls": false 00:20:41.604 } 00:20:41.604 } 00:20:41.604 ] 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "subsystem": "vmd", 00:20:41.604 "config": [] 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "subsystem": "accel", 00:20:41.604 "config": [ 00:20:41.604 { 00:20:41.604 "method": "accel_set_options", 00:20:41.604 "params": { 00:20:41.604 "small_cache_size": 128, 00:20:41.604 "large_cache_size": 16, 00:20:41.604 "task_count": 2048, 00:20:41.604 "sequence_count": 2048, 00:20:41.604 "buf_count": 2048 00:20:41.604 } 00:20:41.604 } 00:20:41.604 ] 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "subsystem": "bdev", 00:20:41.604 "config": [ 00:20:41.604 { 00:20:41.604 "method": "bdev_set_options", 00:20:41.604 "params": { 00:20:41.604 "bdev_io_pool_size": 65535, 00:20:41.604 "bdev_io_cache_size": 256, 00:20:41.604 "bdev_auto_examine": true, 00:20:41.604 "iobuf_small_cache_size": 128, 00:20:41.604 "iobuf_large_cache_size": 16 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "bdev_raid_set_options", 00:20:41.604 "params": { 00:20:41.604 "process_window_size_kb": 1024, 00:20:41.604 "process_max_bandwidth_mb_sec": 0 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "bdev_iscsi_set_options", 00:20:41.604 "params": { 00:20:41.604 "timeout_sec": 30 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "bdev_nvme_set_options", 00:20:41.604 "params": { 00:20:41.604 
"action_on_timeout": "none", 00:20:41.604 "timeout_us": 0, 00:20:41.604 "timeout_admin_us": 0, 00:20:41.604 "keep_alive_timeout_ms": 10000, 00:20:41.604 "arbitration_burst": 0, 00:20:41.604 "low_priority_weight": 0, 00:20:41.604 "medium_priority_weight": 0, 00:20:41.604 "high_priority_weight": 0, 00:20:41.604 "nvme_adminq_poll_period_us": 10000, 00:20:41.604 "nvme_ioq_poll_period_us": 0, 00:20:41.604 "io_queue_requests": 0, 00:20:41.604 "delay_cmd_submit": true, 00:20:41.604 "transport_retry_count": 4, 00:20:41.604 "bdev_retry_count": 3, 00:20:41.604 "transport_ack_timeout": 0, 00:20:41.604 "ctrlr_loss_timeout_sec": 0, 00:20:41.604 "reconnect_delay_sec": 0, 00:20:41.604 "fast_io_fail_timeout_sec": 0, 00:20:41.604 "disable_auto_failback": false, 00:20:41.604 "generate_uuids": false, 00:20:41.604 "transport_tos": 0, 00:20:41.604 "nvme_error_stat": false, 00:20:41.604 "rdma_srq_size": 0, 00:20:41.604 "io_path_stat": false, 00:20:41.604 "allow_accel_sequence": false, 00:20:41.604 "rdma_max_cq_size": 0, 00:20:41.604 "rdma_cm_event_timeout_ms": 0, 00:20:41.604 "dhchap_digests": [ 00:20:41.604 "sha256", 00:20:41.604 "sha384", 00:20:41.604 "sha512" 00:20:41.604 ], 00:20:41.604 "dhchap_dhgroups": [ 00:20:41.604 "null", 00:20:41.604 "ffdhe2048", 00:20:41.604 "ffdhe3072", 00:20:41.604 "ffdhe4096", 00:20:41.604 "ffdhe6144", 00:20:41.604 "ffdhe8192" 00:20:41.604 ] 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "bdev_nvme_set_hotplug", 00:20:41.604 "params": { 00:20:41.604 "period_us": 100000, 00:20:41.604 "enable": false 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "bdev_malloc_create", 00:20:41.604 "params": { 00:20:41.604 "name": "malloc0", 00:20:41.604 "num_blocks": 8192, 00:20:41.604 "block_size": 4096, 00:20:41.604 "physical_block_size": 4096, 00:20:41.604 "uuid": "a9ef2fa6-fc15-4992-bfa0-edc3c7182102", 00:20:41.604 "optimal_io_boundary": 0, 00:20:41.604 "md_size": 0, 00:20:41.604 "dif_type": 0, 00:20:41.604 
"dif_is_head_of_md": false, 00:20:41.604 "dif_pi_format": 0 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "bdev_wait_for_examine" 00:20:41.604 } 00:20:41.604 ] 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "subsystem": "nbd", 00:20:41.604 "config": [] 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "subsystem": "scheduler", 00:20:41.604 "config": [ 00:20:41.604 { 00:20:41.604 "method": "framework_set_scheduler", 00:20:41.604 "params": { 00:20:41.604 "name": "static" 00:20:41.604 } 00:20:41.604 } 00:20:41.604 ] 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "subsystem": "nvmf", 00:20:41.604 "config": [ 00:20:41.604 { 00:20:41.604 "method": "nvmf_set_config", 00:20:41.604 "params": { 00:20:41.604 "discovery_filter": "match_any", 00:20:41.604 "admin_cmd_passthru": { 00:20:41.604 "identify_ctrlr": false 00:20:41.604 }, 00:20:41.604 "dhchap_digests": [ 00:20:41.604 "sha256", 00:20:41.604 "sha384", 00:20:41.604 "sha512" 00:20:41.604 ], 00:20:41.604 "dhchap_dhgroups": [ 00:20:41.604 "null", 00:20:41.604 "ffdhe2048", 00:20:41.604 "ffdhe3072", 00:20:41.604 "ffdhe4096", 00:20:41.604 "ffdhe6144", 00:20:41.604 "ffdhe8192" 00:20:41.604 ] 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "nvmf_set_max_subsystems", 00:20:41.604 "params": { 00:20:41.604 "max_subsystems": 1024 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "nvmf_set_crdt", 00:20:41.604 "params": { 00:20:41.604 "crdt1": 0, 00:20:41.604 "crdt2": 0, 00:20:41.604 "crdt3": 0 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "nvmf_create_transport", 00:20:41.604 "params": { 00:20:41.604 "trtype": "TCP", 00:20:41.604 "max_queue_depth": 128, 00:20:41.604 "max_io_qpairs_per_ctrlr": 127, 00:20:41.604 "in_capsule_data_size": 4096, 00:20:41.604 "max_io_size": 131072, 00:20:41.604 "io_unit_size": 131072, 00:20:41.604 "max_aq_depth": 128, 00:20:41.604 "num_shared_buffers": 511, 00:20:41.604 "buf_cache_size": 4294967295, 00:20:41.604 "dif_insert_or_strip": 
false, 00:20:41.604 "zcopy": false, 00:20:41.604 "c2h_success": false, 00:20:41.604 "sock_priority": 0, 00:20:41.604 "abort_timeout_sec": 1, 00:20:41.604 "ack_timeout": 0, 00:20:41.604 "data_wr_pool_size": 0 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "nvmf_create_subsystem", 00:20:41.604 "params": { 00:20:41.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.604 "allow_any_host": false, 00:20:41.604 "serial_number": "SPDK00000000000001", 00:20:41.604 "model_number": "SPDK bdev Controller", 00:20:41.604 "max_namespaces": 10, 00:20:41.604 "min_cntlid": 1, 00:20:41.604 "max_cntlid": 65519, 00:20:41.604 "ana_reporting": false 00:20:41.604 } 00:20:41.604 }, 00:20:41.604 { 00:20:41.604 "method": "nvmf_subsystem_add_host", 00:20:41.604 "params": { 00:20:41.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.604 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.604 "psk": "key0" 00:20:41.604 } 00:20:41.604 }, 00:20:41.605 { 00:20:41.605 "method": "nvmf_subsystem_add_ns", 00:20:41.605 "params": { 00:20:41.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.605 "namespace": { 00:20:41.605 "nsid": 1, 00:20:41.605 "bdev_name": "malloc0", 00:20:41.605 "nguid": "A9EF2FA6FC154992BFA0EDC3C7182102", 00:20:41.605 "uuid": "a9ef2fa6-fc15-4992-bfa0-edc3c7182102", 00:20:41.605 "no_auto_visible": false 00:20:41.605 } 00:20:41.605 } 00:20:41.605 }, 00:20:41.605 { 00:20:41.605 "method": "nvmf_subsystem_add_listener", 00:20:41.605 "params": { 00:20:41.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.605 "listen_address": { 00:20:41.605 "trtype": "TCP", 00:20:41.605 "adrfam": "IPv4", 00:20:41.605 "traddr": "10.0.0.2", 00:20:41.605 "trsvcid": "4420" 00:20:41.605 }, 00:20:41.605 "secure_channel": true 00:20:41.605 } 00:20:41.605 } 00:20:41.605 ] 00:20:41.605 } 00:20:41.605 ] 00:20:41.605 }' 00:20:41.605 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
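The tgtconf blob captured above is the ordinary JSON that save_config emits, so once written to a file it can be queried with standard tools. A small sketch (the file path is illustrative, and the heredoc reproduces only the keyring portion of the config shown above; the full save_config output has the same shape):

```shell
# Reproduce just the keyring subsystem from the saved config and pull out
# the registered key path with a plain grep.
cat > /tmp/tgtconf_excerpt.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.JkXXP8sNf2" }
        }
      ]
    }
  ]
}
EOF
grep -o '"path": *"[^"]*"' /tmp/tgtconf_excerpt.json   # prints "path": "/tmp/tmp.JkXXP8sNf2"
rm -f /tmp/tgtconf_excerpt.json
```

This is why the trace dumps both the target config and, just below, the bdevperf-side config from /var/tmp/bdevperf.sock: each is a self-contained JSON snapshot that can be replayed with the -c / --json startup options.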
00:20:41.864 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:41.864 "subsystems": [ 00:20:41.864 { 00:20:41.864 "subsystem": "keyring", 00:20:41.864 "config": [ 00:20:41.864 { 00:20:41.864 "method": "keyring_file_add_key", 00:20:41.864 "params": { 00:20:41.864 "name": "key0", 00:20:41.864 "path": "/tmp/tmp.JkXXP8sNf2" 00:20:41.864 } 00:20:41.864 } 00:20:41.864 ] 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "subsystem": "iobuf", 00:20:41.864 "config": [ 00:20:41.864 { 00:20:41.864 "method": "iobuf_set_options", 00:20:41.864 "params": { 00:20:41.864 "small_pool_count": 8192, 00:20:41.864 "large_pool_count": 1024, 00:20:41.864 "small_bufsize": 8192, 00:20:41.864 "large_bufsize": 135168 00:20:41.864 } 00:20:41.864 } 00:20:41.864 ] 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "subsystem": "sock", 00:20:41.864 "config": [ 00:20:41.864 { 00:20:41.864 "method": "sock_set_default_impl", 00:20:41.864 "params": { 00:20:41.864 "impl_name": "posix" 00:20:41.864 } 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "method": "sock_impl_set_options", 00:20:41.864 "params": { 00:20:41.864 "impl_name": "ssl", 00:20:41.864 "recv_buf_size": 4096, 00:20:41.864 "send_buf_size": 4096, 00:20:41.864 "enable_recv_pipe": true, 00:20:41.864 "enable_quickack": false, 00:20:41.864 "enable_placement_id": 0, 00:20:41.864 "enable_zerocopy_send_server": true, 00:20:41.864 "enable_zerocopy_send_client": false, 00:20:41.864 "zerocopy_threshold": 0, 00:20:41.864 "tls_version": 0, 00:20:41.864 "enable_ktls": false 00:20:41.864 } 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "method": "sock_impl_set_options", 00:20:41.864 "params": { 00:20:41.864 "impl_name": "posix", 00:20:41.864 "recv_buf_size": 2097152, 00:20:41.864 "send_buf_size": 2097152, 00:20:41.864 "enable_recv_pipe": true, 00:20:41.864 "enable_quickack": false, 00:20:41.864 "enable_placement_id": 0, 00:20:41.864 "enable_zerocopy_send_server": true, 00:20:41.864 "enable_zerocopy_send_client": false, 
00:20:41.864 "zerocopy_threshold": 0, 00:20:41.864 "tls_version": 0, 00:20:41.864 "enable_ktls": false 00:20:41.864 } 00:20:41.864 } 00:20:41.864 ] 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "subsystem": "vmd", 00:20:41.864 "config": [] 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "subsystem": "accel", 00:20:41.864 "config": [ 00:20:41.864 { 00:20:41.864 "method": "accel_set_options", 00:20:41.864 "params": { 00:20:41.864 "small_cache_size": 128, 00:20:41.864 "large_cache_size": 16, 00:20:41.864 "task_count": 2048, 00:20:41.864 "sequence_count": 2048, 00:20:41.864 "buf_count": 2048 00:20:41.864 } 00:20:41.864 } 00:20:41.864 ] 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "subsystem": "bdev", 00:20:41.864 "config": [ 00:20:41.864 { 00:20:41.864 "method": "bdev_set_options", 00:20:41.864 "params": { 00:20:41.864 "bdev_io_pool_size": 65535, 00:20:41.864 "bdev_io_cache_size": 256, 00:20:41.864 "bdev_auto_examine": true, 00:20:41.864 "iobuf_small_cache_size": 128, 00:20:41.864 "iobuf_large_cache_size": 16 00:20:41.864 } 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "method": "bdev_raid_set_options", 00:20:41.864 "params": { 00:20:41.864 "process_window_size_kb": 1024, 00:20:41.864 "process_max_bandwidth_mb_sec": 0 00:20:41.864 } 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "method": "bdev_iscsi_set_options", 00:20:41.864 "params": { 00:20:41.864 "timeout_sec": 30 00:20:41.864 } 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "method": "bdev_nvme_set_options", 00:20:41.864 "params": { 00:20:41.864 "action_on_timeout": "none", 00:20:41.864 "timeout_us": 0, 00:20:41.864 "timeout_admin_us": 0, 00:20:41.864 "keep_alive_timeout_ms": 10000, 00:20:41.864 "arbitration_burst": 0, 00:20:41.864 "low_priority_weight": 0, 00:20:41.864 "medium_priority_weight": 0, 00:20:41.864 "high_priority_weight": 0, 00:20:41.864 "nvme_adminq_poll_period_us": 10000, 00:20:41.864 "nvme_ioq_poll_period_us": 0, 00:20:41.864 "io_queue_requests": 512, 00:20:41.864 "delay_cmd_submit": true, 00:20:41.864 
"transport_retry_count": 4, 00:20:41.864 "bdev_retry_count": 3, 00:20:41.864 "transport_ack_timeout": 0, 00:20:41.864 "ctrlr_loss_timeout_sec": 0, 00:20:41.864 "reconnect_delay_sec": 0, 00:20:41.864 "fast_io_fail_timeout_sec": 0, 00:20:41.864 "disable_auto_failback": false, 00:20:41.864 "generate_uuids": false, 00:20:41.864 "transport_tos": 0, 00:20:41.864 "nvme_error_stat": false, 00:20:41.865 "rdma_srq_size": 0, 00:20:41.865 "io_path_stat": false, 00:20:41.865 "allow_accel_sequence": false, 00:20:41.865 "rdma_max_cq_size": 0, 00:20:41.865 "rdma_cm_event_timeout_ms": 0, 00:20:41.865 "dhchap_digests": [ 00:20:41.865 "sha256", 00:20:41.865 "sha384", 00:20:41.865 "sha512" 00:20:41.865 ], 00:20:41.865 "dhchap_dhgroups": [ 00:20:41.865 "null", 00:20:41.865 "ffdhe2048", 00:20:41.865 "ffdhe3072", 00:20:41.865 "ffdhe4096", 00:20:41.865 "ffdhe6144", 00:20:41.865 "ffdhe8192" 00:20:41.865 ] 00:20:41.865 } 00:20:41.865 }, 00:20:41.865 { 00:20:41.865 "method": "bdev_nvme_attach_controller", 00:20:41.865 "params": { 00:20:41.865 "name": "TLSTEST", 00:20:41.865 "trtype": "TCP", 00:20:41.865 "adrfam": "IPv4", 00:20:41.865 "traddr": "10.0.0.2", 00:20:41.865 "trsvcid": "4420", 00:20:41.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.865 "prchk_reftag": false, 00:20:41.865 "prchk_guard": false, 00:20:41.865 "ctrlr_loss_timeout_sec": 0, 00:20:41.865 "reconnect_delay_sec": 0, 00:20:41.865 "fast_io_fail_timeout_sec": 0, 00:20:41.865 "psk": "key0", 00:20:41.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.865 "hdgst": false, 00:20:41.865 "ddgst": false 00:20:41.865 } 00:20:41.865 }, 00:20:41.865 { 00:20:41.865 "method": "bdev_nvme_set_hotplug", 00:20:41.865 "params": { 00:20:41.865 "period_us": 100000, 00:20:41.865 "enable": false 00:20:41.865 } 00:20:41.865 }, 00:20:41.865 { 00:20:41.865 "method": "bdev_wait_for_examine" 00:20:41.865 } 00:20:41.865 ] 00:20:41.865 }, 00:20:41.865 { 00:20:41.865 "subsystem": "nbd", 00:20:41.865 "config": [] 00:20:41.865 } 00:20:41.865 ] 
00:20:41.865 }' 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 244243 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 244243 ']' 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 244243 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 244243 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 244243' 00:20:41.865 killing process with pid 244243 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 244243 00:20:41.865 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.865 00:20:41.865 Latency(us) 00:20:41.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.865 =================================================================================================================== 00:20:41.865 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.865 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 244243 00:20:42.123 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 243966 00:20:42.123 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 243966 ']' 00:20:42.123 09:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 243966 00:20:42.123 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:42.123 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:42.123 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 243966 00:20:42.123 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:42.123 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:42.123 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 243966' 00:20:42.123 killing process with pid 243966 00:20:42.123 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 243966 00:20:42.123 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 243966 00:20:42.383 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:42.383 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:42.383 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:42.383 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:42.383 "subsystems": [ 00:20:42.383 { 00:20:42.383 "subsystem": "keyring", 00:20:42.383 "config": [ 00:20:42.383 { 00:20:42.383 "method": "keyring_file_add_key", 00:20:42.383 "params": { 00:20:42.383 "name": "key0", 00:20:42.383 "path": "/tmp/tmp.JkXXP8sNf2" 00:20:42.383 } 00:20:42.383 } 00:20:42.383 ] 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "subsystem": "iobuf", 00:20:42.383 "config": [ 00:20:42.383 { 00:20:42.383 "method": "iobuf_set_options", 00:20:42.383 "params": { 
00:20:42.383 "small_pool_count": 8192, 00:20:42.383 "large_pool_count": 1024, 00:20:42.383 "small_bufsize": 8192, 00:20:42.383 "large_bufsize": 135168 00:20:42.383 } 00:20:42.383 } 00:20:42.383 ] 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "subsystem": "sock", 00:20:42.383 "config": [ 00:20:42.383 { 00:20:42.383 "method": "sock_set_default_impl", 00:20:42.383 "params": { 00:20:42.383 "impl_name": "posix" 00:20:42.383 } 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "method": "sock_impl_set_options", 00:20:42.383 "params": { 00:20:42.383 "impl_name": "ssl", 00:20:42.383 "recv_buf_size": 4096, 00:20:42.383 "send_buf_size": 4096, 00:20:42.383 "enable_recv_pipe": true, 00:20:42.383 "enable_quickack": false, 00:20:42.383 "enable_placement_id": 0, 00:20:42.383 "enable_zerocopy_send_server": true, 00:20:42.383 "enable_zerocopy_send_client": false, 00:20:42.383 "zerocopy_threshold": 0, 00:20:42.383 "tls_version": 0, 00:20:42.383 "enable_ktls": false 00:20:42.383 } 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "method": "sock_impl_set_options", 00:20:42.383 "params": { 00:20:42.383 "impl_name": "posix", 00:20:42.383 "recv_buf_size": 2097152, 00:20:42.383 "send_buf_size": 2097152, 00:20:42.383 "enable_recv_pipe": true, 00:20:42.383 "enable_quickack": false, 00:20:42.383 "enable_placement_id": 0, 00:20:42.383 "enable_zerocopy_send_server": true, 00:20:42.383 "enable_zerocopy_send_client": false, 00:20:42.383 "zerocopy_threshold": 0, 00:20:42.383 "tls_version": 0, 00:20:42.383 "enable_ktls": false 00:20:42.383 } 00:20:42.383 } 00:20:42.383 ] 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "subsystem": "vmd", 00:20:42.383 "config": [] 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "subsystem": "accel", 00:20:42.383 "config": [ 00:20:42.383 { 00:20:42.383 "method": "accel_set_options", 00:20:42.383 "params": { 00:20:42.383 "small_cache_size": 128, 00:20:42.383 "large_cache_size": 16, 00:20:42.383 "task_count": 2048, 00:20:42.383 "sequence_count": 2048, 00:20:42.383 "buf_count": 2048 
00:20:42.383 } 00:20:42.383 } 00:20:42.383 ] 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "subsystem": "bdev", 00:20:42.383 "config": [ 00:20:42.383 { 00:20:42.383 "method": "bdev_set_options", 00:20:42.383 "params": { 00:20:42.383 "bdev_io_pool_size": 65535, 00:20:42.383 "bdev_io_cache_size": 256, 00:20:42.383 "bdev_auto_examine": true, 00:20:42.383 "iobuf_small_cache_size": 128, 00:20:42.383 "iobuf_large_cache_size": 16 00:20:42.383 } 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "method": "bdev_raid_set_options", 00:20:42.383 "params": { 00:20:42.383 "process_window_size_kb": 1024, 00:20:42.383 "process_max_bandwidth_mb_sec": 0 00:20:42.383 } 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "method": "bdev_iscsi_set_options", 00:20:42.383 "params": { 00:20:42.383 "timeout_sec": 30 00:20:42.383 } 00:20:42.383 }, 00:20:42.383 { 00:20:42.383 "method": "bdev_nvme_set_options", 00:20:42.383 "params": { 00:20:42.383 "action_on_timeout": "none", 00:20:42.383 "timeout_us": 0, 00:20:42.383 "timeout_admin_us": 0, 00:20:42.383 "keep_alive_timeout_ms": 10000, 00:20:42.383 "arbitration_burst": 0, 00:20:42.383 "low_priority_weight": 0, 00:20:42.383 "medium_priority_weight": 0, 00:20:42.383 "high_priority_weight": 0, 00:20:42.383 "nvme_adminq_poll_period_us": 10000, 00:20:42.383 "nvme_ioq_poll_period_us": 0, 00:20:42.383 "io_queue_requests": 0, 00:20:42.383 "delay_cmd_submit": true, 00:20:42.384 "transport_retry_count": 4, 00:20:42.384 "bdev_retry_count": 3, 00:20:42.384 "transport_ack_timeout": 0, 00:20:42.384 "ctrlr_loss_timeout_sec": 0, 00:20:42.384 "reconnect_delay_sec": 0, 00:20:42.384 "fast_io_fail_timeout_sec": 0, 00:20:42.384 "disable_auto_failback": false, 00:20:42.384 "generate_uuids": false, 00:20:42.384 "transport_tos": 0, 00:20:42.384 "nvme_error_stat": false, 00:20:42.384 "rdma_srq_size": 0, 00:20:42.384 "io_path_stat": false, 00:20:42.384 "allow_accel_sequence": false, 00:20:42.384 "rdma_max_cq_size": 0, 00:20:42.384 "rdma_cm_event_timeout_ms": 0, 00:20:42.384 
"dhchap_digests": [ 00:20:42.384 "sha256", 00:20:42.384 "sha384", 00:20:42.384 "sha512" 00:20:42.384 ], 00:20:42.384 "dhchap_dhgroups": [ 00:20:42.384 "null", 00:20:42.384 "ffdhe2048", 00:20:42.384 "ffdhe3072", 00:20:42.384 "ffdhe4096", 00:20:42.384 "ffdhe6144", 00:20:42.384 "ffdhe8192" 00:20:42.384 ] 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "bdev_nvme_set_hotplug", 00:20:42.384 "params": { 00:20:42.384 "period_us": 100000, 00:20:42.384 "enable": false 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "bdev_malloc_create", 00:20:42.384 "params": { 00:20:42.384 "name": "malloc0", 00:20:42.384 "num_blocks": 8192, 00:20:42.384 "block_size": 4096, 00:20:42.384 "physical_block_size": 4096, 00:20:42.384 "uuid": "a9ef2fa6-fc15-4992-bfa0-edc3c7182102", 00:20:42.384 "optimal_io_boundary": 0, 00:20:42.384 "md_size": 0, 00:20:42.384 "dif_type": 0, 00:20:42.384 "dif_is_head_of_md": false, 00:20:42.384 "dif_pi_format": 0 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "bdev_wait_for_examine" 00:20:42.384 } 00:20:42.384 ] 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "subsystem": "nbd", 00:20:42.384 "config": [] 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "subsystem": "scheduler", 00:20:42.384 "config": [ 00:20:42.384 { 00:20:42.384 "method": "framework_set_scheduler", 00:20:42.384 "params": { 00:20:42.384 "name": "static" 00:20:42.384 } 00:20:42.384 } 00:20:42.384 ] 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "subsystem": "nvmf", 00:20:42.384 "config": [ 00:20:42.384 { 00:20:42.384 "method": "nvmf_set_config", 00:20:42.384 "params": { 00:20:42.384 "discovery_filter": "match_any", 00:20:42.384 "admin_cmd_passthru": { 00:20:42.384 "identify_ctrlr": false 00:20:42.384 }, 00:20:42.384 "dhchap_digests": [ 00:20:42.384 "sha256", 00:20:42.384 "sha384", 00:20:42.384 "sha512" 00:20:42.384 ], 00:20:42.384 "dhchap_dhgroups": [ 00:20:42.384 "null", 00:20:42.384 "ffdhe2048", 00:20:42.384 "ffdhe3072", 00:20:42.384 "ffdhe4096", 
00:20:42.384 "ffdhe6144", 00:20:42.384 "ffdhe8192" 00:20:42.384 ] 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "nvmf_set_max_subsystems", 00:20:42.384 "params": { 00:20:42.384 "max_subsystems": 1024 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "nvmf_set_crdt", 00:20:42.384 "params": { 00:20:42.384 "crdt1": 0, 00:20:42.384 "crdt2": 0, 00:20:42.384 "crdt3": 0 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "nvmf_create_transport", 00:20:42.384 "params": { 00:20:42.384 "trtype": "TCP", 00:20:42.384 "max_queue_depth": 128, 00:20:42.384 "max_io_qpairs_per_ctrlr": 127, 00:20:42.384 "in_capsule_data_size": 4096, 00:20:42.384 "max_io_size": 131072, 00:20:42.384 "io_unit_size": 131072, 00:20:42.384 "max_aq_depth": 128, 00:20:42.384 "num_shared_buffers": 511, 00:20:42.384 "buf_cache_size": 4294967295, 00:20:42.384 "dif_insert_or_strip": false, 00:20:42.384 "zcopy": false, 00:20:42.384 "c2h_success": false, 00:20:42.384 "sock_priority": 0, 00:20:42.384 "abort_timeout_sec": 1, 00:20:42.384 "ack_timeout": 0, 00:20:42.384 "data_wr_pool_size": 0 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "nvmf_create_subsystem", 00:20:42.384 "params": { 00:20:42.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.384 "allow_any_host": false, 00:20:42.384 "serial_number": "SPDK00000000000001", 00:20:42.384 "model_number": "SPDK bdev Controller", 00:20:42.384 "max_namespaces": 10, 00:20:42.384 "min_cntlid": 1, 00:20:42.384 "max_cntlid": 65519, 00:20:42.384 "ana_reporting": false 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "nvmf_subsystem_add_host", 00:20:42.384 "params": { 00:20:42.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.384 "host": "nqn.2016-06.io.spdk:host1", 00:20:42.384 "psk": "key0" 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "nvmf_subsystem_add_ns", 00:20:42.384 "params": { 00:20:42.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.384 
"namespace": { 00:20:42.384 "nsid": 1, 00:20:42.384 "bdev_name": "malloc0", 00:20:42.384 "nguid": "A9EF2FA6FC154992BFA0EDC3C7182102", 00:20:42.384 "uuid": "a9ef2fa6-fc15-4992-bfa0-edc3c7182102", 00:20:42.384 "no_auto_visible": false 00:20:42.384 } 00:20:42.384 } 00:20:42.384 }, 00:20:42.384 { 00:20:42.384 "method": "nvmf_subsystem_add_listener", 00:20:42.384 "params": { 00:20:42.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.384 "listen_address": { 00:20:42.384 "trtype": "TCP", 00:20:42.384 "adrfam": "IPv4", 00:20:42.384 "traddr": "10.0.0.2", 00:20:42.384 "trsvcid": "4420" 00:20:42.384 }, 00:20:42.384 "secure_channel": true 00:20:42.384 } 00:20:42.384 } 00:20:42.384 ] 00:20:42.384 } 00:20:42.384 ] 00:20:42.384 }' 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=244517 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 244517 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 244517 ']' 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:42.384 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.384 [2024-10-07 09:41:31.324806] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:42.385 [2024-10-07 09:41:31.324896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.643 [2024-10-07 09:41:31.386574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.643 [2024-10-07 09:41:31.494981] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.643 [2024-10-07 09:41:31.495056] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.643 [2024-10-07 09:41:31.495069] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.643 [2024-10-07 09:41:31.495081] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.643 [2024-10-07 09:41:31.495090] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.643 [2024-10-07 09:41:31.495713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.901 [2024-10-07 09:41:31.736679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.901 [2024-10-07 09:41:31.768711] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.901 [2024-10-07 09:41:31.768976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=244663 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 244663 /var/tmp/bdevperf.sock 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 244663 ']' 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:43.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:43.469 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:43.469 "subsystems": [ 00:20:43.469 { 00:20:43.469 "subsystem": "keyring", 00:20:43.469 "config": [ 00:20:43.469 { 00:20:43.469 "method": "keyring_file_add_key", 00:20:43.469 "params": { 00:20:43.469 "name": "key0", 00:20:43.469 "path": "/tmp/tmp.JkXXP8sNf2" 00:20:43.469 } 00:20:43.469 } 00:20:43.469 ] 00:20:43.469 }, 00:20:43.469 { 00:20:43.469 "subsystem": "iobuf", 00:20:43.469 "config": [ 00:20:43.469 { 00:20:43.469 "method": "iobuf_set_options", 00:20:43.469 "params": { 00:20:43.469 "small_pool_count": 8192, 00:20:43.469 "large_pool_count": 1024, 00:20:43.469 "small_bufsize": 8192, 00:20:43.469 "large_bufsize": 135168 00:20:43.469 } 00:20:43.469 } 00:20:43.469 ] 00:20:43.469 }, 00:20:43.469 { 00:20:43.469 "subsystem": "sock", 00:20:43.469 "config": [ 00:20:43.469 { 00:20:43.469 "method": "sock_set_default_impl", 00:20:43.469 "params": { 00:20:43.469 "impl_name": "posix" 00:20:43.469 } 00:20:43.469 }, 00:20:43.469 { 00:20:43.469 "method": "sock_impl_set_options", 00:20:43.469 "params": { 00:20:43.469 "impl_name": "ssl", 00:20:43.469 "recv_buf_size": 4096, 00:20:43.469 "send_buf_size": 4096, 00:20:43.469 "enable_recv_pipe": true, 00:20:43.469 "enable_quickack": false, 00:20:43.469 "enable_placement_id": 0, 00:20:43.469 "enable_zerocopy_send_server": true, 00:20:43.469 "enable_zerocopy_send_client": false, 00:20:43.469 
"zerocopy_threshold": 0, 00:20:43.469 "tls_version": 0, 00:20:43.469 "enable_ktls": false 00:20:43.469 } 00:20:43.469 }, 00:20:43.469 { 00:20:43.469 "method": "sock_impl_set_options", 00:20:43.469 "params": { 00:20:43.469 "impl_name": "posix", 00:20:43.469 "recv_buf_size": 2097152, 00:20:43.469 "send_buf_size": 2097152, 00:20:43.469 "enable_recv_pipe": true, 00:20:43.469 "enable_quickack": false, 00:20:43.469 "enable_placement_id": 0, 00:20:43.469 "enable_zerocopy_send_server": true, 00:20:43.469 "enable_zerocopy_send_client": false, 00:20:43.469 "zerocopy_threshold": 0, 00:20:43.469 "tls_version": 0, 00:20:43.469 "enable_ktls": false 00:20:43.469 } 00:20:43.469 } 00:20:43.469 ] 00:20:43.469 }, 00:20:43.469 { 00:20:43.469 "subsystem": "vmd", 00:20:43.469 "config": [] 00:20:43.469 }, 00:20:43.469 { 00:20:43.469 "subsystem": "accel", 00:20:43.469 "config": [ 00:20:43.469 { 00:20:43.469 "method": "accel_set_options", 00:20:43.469 "params": { 00:20:43.470 "small_cache_size": 128, 00:20:43.470 "large_cache_size": 16, 00:20:43.470 "task_count": 2048, 00:20:43.470 "sequence_count": 2048, 00:20:43.470 "buf_count": 2048 00:20:43.470 } 00:20:43.470 } 00:20:43.470 ] 00:20:43.470 }, 00:20:43.470 { 00:20:43.470 "subsystem": "bdev", 00:20:43.470 "config": [ 00:20:43.470 { 00:20:43.470 "method": "bdev_set_options", 00:20:43.470 "params": { 00:20:43.470 "bdev_io_pool_size": 65535, 00:20:43.470 "bdev_io_cache_size": 256, 00:20:43.470 "bdev_auto_examine": true, 00:20:43.470 "iobuf_small_cache_size": 128, 00:20:43.470 "iobuf_large_cache_size": 16 00:20:43.470 } 00:20:43.470 }, 00:20:43.470 { 00:20:43.470 "method": "bdev_raid_set_options", 00:20:43.470 "params": { 00:20:43.470 "process_window_size_kb": 1024, 00:20:43.470 "process_max_bandwidth_mb_sec": 0 00:20:43.470 } 00:20:43.470 }, 00:20:43.470 { 00:20:43.470 "method": "bdev_iscsi_set_options", 00:20:43.470 "params": { 00:20:43.470 "timeout_sec": 30 00:20:43.470 } 00:20:43.470 }, 00:20:43.470 { 00:20:43.470 "method": 
"bdev_nvme_set_options", 00:20:43.470 "params": { 00:20:43.470 "action_on_timeout": "none", 00:20:43.470 "timeout_us": 0, 00:20:43.470 "timeout_admin_us": 0, 00:20:43.470 "keep_alive_timeout_ms": 10000, 00:20:43.470 "arbitration_burst": 0, 00:20:43.470 "low_priority_weight": 0, 00:20:43.470 "medium_priority_weight": 0, 00:20:43.470 "high_priority_weight": 0, 00:20:43.470 "nvme_adminq_poll_period_us": 10000, 00:20:43.470 "nvme_ioq_poll_period_us": 0, 00:20:43.470 "io_queue_requests": 512, 00:20:43.470 "delay_cmd_submit": true, 00:20:43.470 "transport_retry_count": 4, 00:20:43.470 "bdev_retry_count": 3, 00:20:43.470 "transport_ack_timeout": 0, 00:20:43.470 "ctrlr_loss_timeout_sec": 0, 00:20:43.470 "reconnect_delay_sec": 0, 00:20:43.470 "fast_io_fail_timeout_sec": 0, 00:20:43.470 "disable_auto_failback": false, 00:20:43.470 "generate_uuids": false, 00:20:43.470 "transport_tos": 0, 00:20:43.470 "nvme_error_stat": false, 00:20:43.470 "rdma_srq_size": 0, 00:20:43.470 "io_path_stat": false, 00:20:43.470 "allow_accel_sequence": false, 00:20:43.470 "rdma_max_cq_size": 0, 00:20:43.470 "rdma_cm_event_timeout_ms": 0, 00:20:43.470 "dhchap_digests": [ 00:20:43.470 "sha256", 00:20:43.470 "sha384", 00:20:43.470 "sha512" 00:20:43.470 ], 00:20:43.470 "dhchap_dhgroups": [ 00:20:43.470 "null", 00:20:43.470 "ffdhe2048", 00:20:43.470 "ffdhe3072", 00:20:43.470 "ffdhe4096", 00:20:43.470 "ffdhe6144", 00:20:43.470 "ffdhe8192" 00:20:43.470 ] 00:20:43.470 } 00:20:43.470 }, 00:20:43.470 { 00:20:43.470 "method": "bdev_nvme_attach_controller", 00:20:43.470 "params": { 00:20:43.470 "name": "TLSTEST", 00:20:43.470 "trtype": "TCP", 00:20:43.470 "adrfam": "IPv4", 00:20:43.470 "traddr": "10.0.0.2", 00:20:43.470 "trsvcid": "4420", 00:20:43.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.470 "prchk_reftag": false, 00:20:43.470 "prchk_guard": false, 00:20:43.470 "ctrlr_loss_timeout_sec": 0, 00:20:43.470 "reconnect_delay_sec": 0, 00:20:43.470 "fast_io_fail_timeout_sec": 0, 00:20:43.470 "psk": 
"key0", 00:20:43.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.470 "hdgst": false, 00:20:43.470 "ddgst": false 00:20:43.470 } 00:20:43.470 }, 00:20:43.470 { 00:20:43.470 "method": "bdev_nvme_set_hotplug", 00:20:43.470 "params": { 00:20:43.470 "period_us": 100000, 00:20:43.470 "enable": false 00:20:43.470 } 00:20:43.470 }, 00:20:43.470 { 00:20:43.470 "method": "bdev_wait_for_examine" 00:20:43.470 } 00:20:43.470 ] 00:20:43.470 }, 00:20:43.470 { 00:20:43.470 "subsystem": "nbd", 00:20:43.470 "config": [] 00:20:43.470 } 00:20:43.470 ] 00:20:43.470 }' 00:20:43.470 [2024-10-07 09:41:32.407171] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:43.470 [2024-10-07 09:41:32.407259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244663 ] 00:20:43.728 [2024-10-07 09:41:32.471954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.728 [2024-10-07 09:41:32.588457] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.986 [2024-10-07 09:41:32.766114] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.552 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.552 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:44.552 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:44.552 Running I/O for 10 seconds... 
00:20:54.870 3005.00 IOPS, 11.74 MiB/s 3244.00 IOPS, 12.67 MiB/s 3306.33 IOPS, 12.92 MiB/s 3342.50 IOPS, 13.06 MiB/s 3329.60 IOPS, 13.01 MiB/s 3330.83 IOPS, 13.01 MiB/s 3328.57 IOPS, 13.00 MiB/s 3350.38 IOPS, 13.09 MiB/s 3349.22 IOPS, 13.08 MiB/s 3356.00 IOPS, 13.11 MiB/s 00:20:54.870 Latency(us) 00:20:54.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.870 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.870 Verification LBA range: start 0x0 length 0x2000 00:20:54.870 TLSTESTn1 : 10.03 3359.04 13.12 0.00 0.00 38033.94 10437.21 44467.39 00:20:54.870 =================================================================================================================== 00:20:54.870 Total : 3359.04 13.12 0.00 0.00 38033.94 10437.21 44467.39 00:20:54.870 { 00:20:54.870 "results": [ 00:20:54.870 { 00:20:54.870 "job": "TLSTESTn1", 00:20:54.870 "core_mask": "0x4", 00:20:54.870 "workload": "verify", 00:20:54.870 "status": "finished", 00:20:54.870 "verify_range": { 00:20:54.870 "start": 0, 00:20:54.870 "length": 8192 00:20:54.870 }, 00:20:54.870 "queue_depth": 128, 00:20:54.870 "io_size": 4096, 00:20:54.870 "runtime": 10.028761, 00:20:54.870 "iops": 3359.0390677372807, 00:20:54.870 "mibps": 13.121246358348753, 00:20:54.870 "io_failed": 0, 00:20:54.870 "io_timeout": 0, 00:20:54.870 "avg_latency_us": 38033.9428178141, 00:20:54.870 "min_latency_us": 10437.214814814815, 00:20:54.870 "max_latency_us": 44467.38962962963 00:20:54.870 } 00:20:54.870 ], 00:20:54.870 "core_count": 1 00:20:54.870 } 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 244663 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 244663 ']' 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 244663 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 244663 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 244663' 00:20:54.870 killing process with pid 244663 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 244663 00:20:54.870 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.870 00:20:54.870 Latency(us) 00:20:54.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.870 =================================================================================================================== 00:20:54.870 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.870 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 244663 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 244517 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 244517 ']' 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 244517 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.128 09:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 244517 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 244517' 00:20:55.128 killing process with pid 244517 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 244517 00:20:55.128 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 244517 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=246038 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 246038 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 246038 ']' 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.387 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.388 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.388 [2024-10-07 09:41:44.234387] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:55.388 [2024-10-07 09:41:44.234458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.388 [2024-10-07 09:41:44.294464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.646 [2024-10-07 09:41:44.401697] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.646 [2024-10-07 09:41:44.401769] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.646 [2024-10-07 09:41:44.401784] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.646 [2024-10-07 09:41:44.401796] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.646 [2024-10-07 09:41:44.401805] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.646 [2024-10-07 09:41:44.402358] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.JkXXP8sNf2 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JkXXP8sNf2 00:20:55.646 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.903 [2024-10-07 09:41:44.794843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.903 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.160 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:56.730 [2024-10-07 09:41:45.424574] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.730 [2024-10-07 09:41:45.424859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:56.730 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:56.730 malloc0 00:20:56.988 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.245 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:20:57.502 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=246319 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 246319 /var/tmp/bdevperf.sock 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 246319 ']' 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:57.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.760 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.760 [2024-10-07 09:41:46.649868] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:20:57.760 [2024-10-07 09:41:46.649939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246319 ] 00:20:57.760 [2024-10-07 09:41:46.704330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.017 [2024-10-07 09:41:46.809981] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.018 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.018 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:58.018 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:20:58.275 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:58.533 [2024-10-07 09:41:47.440344] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.533 nvme0n1 00:20:58.533 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:58.792 Running I/O for 1 seconds... 00:20:59.728 3235.00 IOPS, 12.64 MiB/s 00:20:59.728 Latency(us) 00:20:59.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.728 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:59.728 Verification LBA range: start 0x0 length 0x2000 00:20:59.728 nvme0n1 : 1.02 3289.46 12.85 0.00 0.00 38548.72 6553.60 36700.16 00:20:59.728 =================================================================================================================== 00:20:59.728 Total : 3289.46 12.85 0.00 0.00 38548.72 6553.60 36700.16 00:20:59.728 { 00:20:59.728 "results": [ 00:20:59.728 { 00:20:59.728 "job": "nvme0n1", 00:20:59.728 "core_mask": "0x2", 00:20:59.728 "workload": "verify", 00:20:59.728 "status": "finished", 00:20:59.728 "verify_range": { 00:20:59.728 "start": 0, 00:20:59.728 "length": 8192 00:20:59.728 }, 00:20:59.728 "queue_depth": 128, 00:20:59.728 "io_size": 4096, 00:20:59.728 "runtime": 1.022356, 00:20:59.728 "iops": 3289.460814041293, 00:20:59.728 "mibps": 12.8494563048488, 00:20:59.728 "io_failed": 0, 00:20:59.728 "io_timeout": 0, 00:20:59.728 "avg_latency_us": 38548.72398674023, 00:20:59.728 "min_latency_us": 6553.6, 00:20:59.728 "max_latency_us": 36700.16 00:20:59.728 } 00:20:59.728 ], 00:20:59.728 "core_count": 1 00:20:59.728 } 00:20:59.728 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 246319 00:20:59.728 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 246319 ']' 00:20:59.728 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 246319 00:20:59.728 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:59.728 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:20:59.728 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 246319 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 246319' 00:20:59.985 killing process with pid 246319 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 246319 00:20:59.985 Received shutdown signal, test time was about 1.000000 seconds 00:20:59.985 00:20:59.985 Latency(us) 00:20:59.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.985 =================================================================================================================== 00:20:59.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 246319 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 246038 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 246038 ']' 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 246038 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.985 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 246038 00:21:00.244 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:00.244 09:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:00.244 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 246038' 00:21:00.244 killing process with pid 246038 00:21:00.244 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 246038 00:21:00.244 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 246038 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=246592 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 246592 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 246592 ']' 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.502 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.502 [2024-10-07 09:41:49.315815] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:21:00.502 [2024-10-07 09:41:49.315897] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.502 [2024-10-07 09:41:49.375481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.502 [2024-10-07 09:41:49.473599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.502 [2024-10-07 09:41:49.473659] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.502 [2024-10-07 09:41:49.473689] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.502 [2024-10-07 09:41:49.473700] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.502 [2024-10-07 09:41:49.473709] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:00.502 [2024-10-07 09:41:49.474195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.760 [2024-10-07 09:41:49.614000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.760 malloc0 00:21:00.760 [2024-10-07 09:41:49.654128] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.760 [2024-10-07 09:41:49.654414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=246625 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 246625 /var/tmp/bdevperf.sock 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 246625 ']' 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.760 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.760 [2024-10-07 09:41:49.725906] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:21:00.760 [2024-10-07 09:41:49.725976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246625 ] 00:21:01.018 [2024-10-07 09:41:49.783273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.018 [2024-10-07 09:41:49.888806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.018 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.018 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:01.018 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JkXXP8sNf2 00:21:01.584 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:01.841 [2024-10-07 09:41:50.585148] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.841 nvme0n1 00:21:01.841 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.841 Running I/O for 1 seconds... 
00:21:03.218 2946.00 IOPS, 11.51 MiB/s 00:21:03.218 Latency(us) 00:21:03.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.218 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:03.218 Verification LBA range: start 0x0 length 0x2000 00:21:03.218 nvme0n1 : 1.03 2987.30 11.67 0.00 0.00 42260.13 10582.85 39612.87 00:21:03.218 =================================================================================================================== 00:21:03.218 Total : 2987.30 11.67 0.00 0.00 42260.13 10582.85 39612.87 00:21:03.218 { 00:21:03.218 "results": [ 00:21:03.218 { 00:21:03.218 "job": "nvme0n1", 00:21:03.218 "core_mask": "0x2", 00:21:03.218 "workload": "verify", 00:21:03.218 "status": "finished", 00:21:03.218 "verify_range": { 00:21:03.218 "start": 0, 00:21:03.218 "length": 8192 00:21:03.218 }, 00:21:03.218 "queue_depth": 128, 00:21:03.218 "io_size": 4096, 00:21:03.218 "runtime": 1.029023, 00:21:03.218 "iops": 2987.2996036045843, 00:21:03.218 "mibps": 11.669139076580407, 00:21:03.218 "io_failed": 0, 00:21:03.218 "io_timeout": 0, 00:21:03.218 "avg_latency_us": 42260.13133810453, 00:21:03.218 "min_latency_us": 10582.85037037037, 00:21:03.218 "max_latency_us": 39612.87111111111 00:21:03.218 } 00:21:03.218 ], 00:21:03.218 "core_count": 1 00:21:03.218 } 00:21:03.218 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:03.218 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.218 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.219 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.219 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:03.219 "subsystems": [ 00:21:03.219 { 00:21:03.219 "subsystem": "keyring", 00:21:03.219 "config": [ 00:21:03.219 { 00:21:03.219 "method": 
"keyring_file_add_key", 00:21:03.219 "params": { 00:21:03.219 "name": "key0", 00:21:03.219 "path": "/tmp/tmp.JkXXP8sNf2" 00:21:03.219 } 00:21:03.219 } 00:21:03.219 ] 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "subsystem": "iobuf", 00:21:03.219 "config": [ 00:21:03.219 { 00:21:03.219 "method": "iobuf_set_options", 00:21:03.219 "params": { 00:21:03.219 "small_pool_count": 8192, 00:21:03.219 "large_pool_count": 1024, 00:21:03.219 "small_bufsize": 8192, 00:21:03.219 "large_bufsize": 135168 00:21:03.219 } 00:21:03.219 } 00:21:03.219 ] 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "subsystem": "sock", 00:21:03.219 "config": [ 00:21:03.219 { 00:21:03.219 "method": "sock_set_default_impl", 00:21:03.219 "params": { 00:21:03.219 "impl_name": "posix" 00:21:03.219 } 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "method": "sock_impl_set_options", 00:21:03.219 "params": { 00:21:03.219 "impl_name": "ssl", 00:21:03.219 "recv_buf_size": 4096, 00:21:03.219 "send_buf_size": 4096, 00:21:03.219 "enable_recv_pipe": true, 00:21:03.219 "enable_quickack": false, 00:21:03.219 "enable_placement_id": 0, 00:21:03.219 "enable_zerocopy_send_server": true, 00:21:03.219 "enable_zerocopy_send_client": false, 00:21:03.219 "zerocopy_threshold": 0, 00:21:03.219 "tls_version": 0, 00:21:03.219 "enable_ktls": false 00:21:03.219 } 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "method": "sock_impl_set_options", 00:21:03.219 "params": { 00:21:03.219 "impl_name": "posix", 00:21:03.219 "recv_buf_size": 2097152, 00:21:03.219 "send_buf_size": 2097152, 00:21:03.219 "enable_recv_pipe": true, 00:21:03.219 "enable_quickack": false, 00:21:03.219 "enable_placement_id": 0, 00:21:03.219 "enable_zerocopy_send_server": true, 00:21:03.219 "enable_zerocopy_send_client": false, 00:21:03.219 "zerocopy_threshold": 0, 00:21:03.219 "tls_version": 0, 00:21:03.219 "enable_ktls": false 00:21:03.219 } 00:21:03.219 } 00:21:03.219 ] 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "subsystem": "vmd", 00:21:03.219 "config": [] 00:21:03.219 }, 
00:21:03.219 { 00:21:03.219 "subsystem": "accel", 00:21:03.219 "config": [ 00:21:03.219 { 00:21:03.219 "method": "accel_set_options", 00:21:03.219 "params": { 00:21:03.219 "small_cache_size": 128, 00:21:03.219 "large_cache_size": 16, 00:21:03.219 "task_count": 2048, 00:21:03.219 "sequence_count": 2048, 00:21:03.219 "buf_count": 2048 00:21:03.219 } 00:21:03.219 } 00:21:03.219 ] 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "subsystem": "bdev", 00:21:03.219 "config": [ 00:21:03.219 { 00:21:03.219 "method": "bdev_set_options", 00:21:03.219 "params": { 00:21:03.219 "bdev_io_pool_size": 65535, 00:21:03.219 "bdev_io_cache_size": 256, 00:21:03.219 "bdev_auto_examine": true, 00:21:03.219 "iobuf_small_cache_size": 128, 00:21:03.219 "iobuf_large_cache_size": 16 00:21:03.219 } 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "method": "bdev_raid_set_options", 00:21:03.219 "params": { 00:21:03.219 "process_window_size_kb": 1024, 00:21:03.219 "process_max_bandwidth_mb_sec": 0 00:21:03.219 } 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "method": "bdev_iscsi_set_options", 00:21:03.219 "params": { 00:21:03.219 "timeout_sec": 30 00:21:03.219 } 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "method": "bdev_nvme_set_options", 00:21:03.219 "params": { 00:21:03.219 "action_on_timeout": "none", 00:21:03.219 "timeout_us": 0, 00:21:03.219 "timeout_admin_us": 0, 00:21:03.219 "keep_alive_timeout_ms": 10000, 00:21:03.219 "arbitration_burst": 0, 00:21:03.219 "low_priority_weight": 0, 00:21:03.219 "medium_priority_weight": 0, 00:21:03.219 "high_priority_weight": 0, 00:21:03.219 "nvme_adminq_poll_period_us": 10000, 00:21:03.219 "nvme_ioq_poll_period_us": 0, 00:21:03.219 "io_queue_requests": 0, 00:21:03.219 "delay_cmd_submit": true, 00:21:03.219 "transport_retry_count": 4, 00:21:03.219 "bdev_retry_count": 3, 00:21:03.219 "transport_ack_timeout": 0, 00:21:03.219 "ctrlr_loss_timeout_sec": 0, 00:21:03.219 "reconnect_delay_sec": 0, 00:21:03.219 "fast_io_fail_timeout_sec": 0, 00:21:03.219 
"disable_auto_failback": false, 00:21:03.219 "generate_uuids": false, 00:21:03.219 "transport_tos": 0, 00:21:03.219 "nvme_error_stat": false, 00:21:03.219 "rdma_srq_size": 0, 00:21:03.219 "io_path_stat": false, 00:21:03.219 "allow_accel_sequence": false, 00:21:03.219 "rdma_max_cq_size": 0, 00:21:03.219 "rdma_cm_event_timeout_ms": 0, 00:21:03.219 "dhchap_digests": [ 00:21:03.219 "sha256", 00:21:03.219 "sha384", 00:21:03.219 "sha512" 00:21:03.219 ], 00:21:03.219 "dhchap_dhgroups": [ 00:21:03.219 "null", 00:21:03.219 "ffdhe2048", 00:21:03.219 "ffdhe3072", 00:21:03.219 "ffdhe4096", 00:21:03.219 "ffdhe6144", 00:21:03.219 "ffdhe8192" 00:21:03.219 ] 00:21:03.219 } 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "method": "bdev_nvme_set_hotplug", 00:21:03.219 "params": { 00:21:03.219 "period_us": 100000, 00:21:03.219 "enable": false 00:21:03.219 } 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "method": "bdev_malloc_create", 00:21:03.219 "params": { 00:21:03.219 "name": "malloc0", 00:21:03.219 "num_blocks": 8192, 00:21:03.219 "block_size": 4096, 00:21:03.219 "physical_block_size": 4096, 00:21:03.219 "uuid": "c5f06b64-6c64-4292-a241-724aa1a337cf", 00:21:03.219 "optimal_io_boundary": 0, 00:21:03.219 "md_size": 0, 00:21:03.219 "dif_type": 0, 00:21:03.219 "dif_is_head_of_md": false, 00:21:03.219 "dif_pi_format": 0 00:21:03.219 } 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "method": "bdev_wait_for_examine" 00:21:03.219 } 00:21:03.219 ] 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "subsystem": "nbd", 00:21:03.219 "config": [] 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "subsystem": "scheduler", 00:21:03.219 "config": [ 00:21:03.219 { 00:21:03.219 "method": "framework_set_scheduler", 00:21:03.219 "params": { 00:21:03.219 "name": "static" 00:21:03.219 } 00:21:03.219 } 00:21:03.219 ] 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "subsystem": "nvmf", 00:21:03.219 "config": [ 00:21:03.220 { 00:21:03.220 "method": "nvmf_set_config", 00:21:03.220 "params": { 00:21:03.220 "discovery_filter": 
"match_any", 00:21:03.220 "admin_cmd_passthru": { 00:21:03.220 "identify_ctrlr": false 00:21:03.220 }, 00:21:03.220 "dhchap_digests": [ 00:21:03.220 "sha256", 00:21:03.220 "sha384", 00:21:03.220 "sha512" 00:21:03.220 ], 00:21:03.220 "dhchap_dhgroups": [ 00:21:03.220 "null", 00:21:03.220 "ffdhe2048", 00:21:03.220 "ffdhe3072", 00:21:03.220 "ffdhe4096", 00:21:03.220 "ffdhe6144", 00:21:03.220 "ffdhe8192" 00:21:03.220 ] 00:21:03.220 } 00:21:03.220 }, 00:21:03.220 { 00:21:03.220 "method": "nvmf_set_max_subsystems", 00:21:03.220 "params": { 00:21:03.220 "max_subsystems": 1024 00:21:03.220 } 00:21:03.220 }, 00:21:03.220 { 00:21:03.220 "method": "nvmf_set_crdt", 00:21:03.220 "params": { 00:21:03.220 "crdt1": 0, 00:21:03.220 "crdt2": 0, 00:21:03.220 "crdt3": 0 00:21:03.220 } 00:21:03.220 }, 00:21:03.220 { 00:21:03.220 "method": "nvmf_create_transport", 00:21:03.220 "params": { 00:21:03.220 "trtype": "TCP", 00:21:03.220 "max_queue_depth": 128, 00:21:03.220 "max_io_qpairs_per_ctrlr": 127, 00:21:03.220 "in_capsule_data_size": 4096, 00:21:03.220 "max_io_size": 131072, 00:21:03.220 "io_unit_size": 131072, 00:21:03.220 "max_aq_depth": 128, 00:21:03.220 "num_shared_buffers": 511, 00:21:03.220 "buf_cache_size": 4294967295, 00:21:03.220 "dif_insert_or_strip": false, 00:21:03.220 "zcopy": false, 00:21:03.220 "c2h_success": false, 00:21:03.220 "sock_priority": 0, 00:21:03.220 "abort_timeout_sec": 1, 00:21:03.220 "ack_timeout": 0, 00:21:03.220 "data_wr_pool_size": 0 00:21:03.220 } 00:21:03.220 }, 00:21:03.220 { 00:21:03.220 "method": "nvmf_create_subsystem", 00:21:03.220 "params": { 00:21:03.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.220 "allow_any_host": false, 00:21:03.220 "serial_number": "00000000000000000000", 00:21:03.220 "model_number": "SPDK bdev Controller", 00:21:03.220 "max_namespaces": 32, 00:21:03.220 "min_cntlid": 1, 00:21:03.220 "max_cntlid": 65519, 00:21:03.220 "ana_reporting": false 00:21:03.220 } 00:21:03.220 }, 00:21:03.220 { 00:21:03.220 "method": 
"nvmf_subsystem_add_host", 00:21:03.220 "params": { 00:21:03.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.220 "host": "nqn.2016-06.io.spdk:host1", 00:21:03.220 "psk": "key0" 00:21:03.220 } 00:21:03.220 }, 00:21:03.220 { 00:21:03.220 "method": "nvmf_subsystem_add_ns", 00:21:03.220 "params": { 00:21:03.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.220 "namespace": { 00:21:03.220 "nsid": 1, 00:21:03.220 "bdev_name": "malloc0", 00:21:03.220 "nguid": "C5F06B646C644292A241724AA1A337CF", 00:21:03.220 "uuid": "c5f06b64-6c64-4292-a241-724aa1a337cf", 00:21:03.220 "no_auto_visible": false 00:21:03.220 } 00:21:03.220 } 00:21:03.220 }, 00:21:03.220 { 00:21:03.220 "method": "nvmf_subsystem_add_listener", 00:21:03.220 "params": { 00:21:03.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.220 "listen_address": { 00:21:03.220 "trtype": "TCP", 00:21:03.220 "adrfam": "IPv4", 00:21:03.220 "traddr": "10.0.0.2", 00:21:03.220 "trsvcid": "4420" 00:21:03.220 }, 00:21:03.220 "secure_channel": false, 00:21:03.220 "sock_impl": "ssl" 00:21:03.220 } 00:21:03.220 } 00:21:03.220 ] 00:21:03.220 } 00:21:03.220 ] 00:21:03.220 }' 00:21:03.220 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:03.478 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:03.478 "subsystems": [ 00:21:03.478 { 00:21:03.478 "subsystem": "keyring", 00:21:03.478 "config": [ 00:21:03.478 { 00:21:03.478 "method": "keyring_file_add_key", 00:21:03.478 "params": { 00:21:03.478 "name": "key0", 00:21:03.478 "path": "/tmp/tmp.JkXXP8sNf2" 00:21:03.478 } 00:21:03.478 } 00:21:03.478 ] 00:21:03.478 }, 00:21:03.478 { 00:21:03.478 "subsystem": "iobuf", 00:21:03.478 "config": [ 00:21:03.478 { 00:21:03.478 "method": "iobuf_set_options", 00:21:03.478 "params": { 00:21:03.478 "small_pool_count": 8192, 00:21:03.478 "large_pool_count": 1024, 00:21:03.478 "small_bufsize": 
8192, 00:21:03.478 "large_bufsize": 135168 00:21:03.478 } 00:21:03.478 } 00:21:03.478 ] 00:21:03.478 }, 00:21:03.478 { 00:21:03.478 "subsystem": "sock", 00:21:03.479 "config": [ 00:21:03.479 { 00:21:03.479 "method": "sock_set_default_impl", 00:21:03.479 "params": { 00:21:03.479 "impl_name": "posix" 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "sock_impl_set_options", 00:21:03.479 "params": { 00:21:03.479 "impl_name": "ssl", 00:21:03.479 "recv_buf_size": 4096, 00:21:03.479 "send_buf_size": 4096, 00:21:03.479 "enable_recv_pipe": true, 00:21:03.479 "enable_quickack": false, 00:21:03.479 "enable_placement_id": 0, 00:21:03.479 "enable_zerocopy_send_server": true, 00:21:03.479 "enable_zerocopy_send_client": false, 00:21:03.479 "zerocopy_threshold": 0, 00:21:03.479 "tls_version": 0, 00:21:03.479 "enable_ktls": false 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "sock_impl_set_options", 00:21:03.479 "params": { 00:21:03.479 "impl_name": "posix", 00:21:03.479 "recv_buf_size": 2097152, 00:21:03.479 "send_buf_size": 2097152, 00:21:03.479 "enable_recv_pipe": true, 00:21:03.479 "enable_quickack": false, 00:21:03.479 "enable_placement_id": 0, 00:21:03.479 "enable_zerocopy_send_server": true, 00:21:03.479 "enable_zerocopy_send_client": false, 00:21:03.479 "zerocopy_threshold": 0, 00:21:03.479 "tls_version": 0, 00:21:03.479 "enable_ktls": false 00:21:03.479 } 00:21:03.479 } 00:21:03.479 ] 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "subsystem": "vmd", 00:21:03.479 "config": [] 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "subsystem": "accel", 00:21:03.479 "config": [ 00:21:03.479 { 00:21:03.479 "method": "accel_set_options", 00:21:03.479 "params": { 00:21:03.479 "small_cache_size": 128, 00:21:03.479 "large_cache_size": 16, 00:21:03.479 "task_count": 2048, 00:21:03.479 "sequence_count": 2048, 00:21:03.479 "buf_count": 2048 00:21:03.479 } 00:21:03.479 } 00:21:03.479 ] 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "subsystem": "bdev", 
00:21:03.479 "config": [ 00:21:03.479 { 00:21:03.479 "method": "bdev_set_options", 00:21:03.479 "params": { 00:21:03.479 "bdev_io_pool_size": 65535, 00:21:03.479 "bdev_io_cache_size": 256, 00:21:03.479 "bdev_auto_examine": true, 00:21:03.479 "iobuf_small_cache_size": 128, 00:21:03.479 "iobuf_large_cache_size": 16 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "bdev_raid_set_options", 00:21:03.479 "params": { 00:21:03.479 "process_window_size_kb": 1024, 00:21:03.479 "process_max_bandwidth_mb_sec": 0 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "bdev_iscsi_set_options", 00:21:03.479 "params": { 00:21:03.479 "timeout_sec": 30 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "bdev_nvme_set_options", 00:21:03.479 "params": { 00:21:03.479 "action_on_timeout": "none", 00:21:03.479 "timeout_us": 0, 00:21:03.479 "timeout_admin_us": 0, 00:21:03.479 "keep_alive_timeout_ms": 10000, 00:21:03.479 "arbitration_burst": 0, 00:21:03.479 "low_priority_weight": 0, 00:21:03.479 "medium_priority_weight": 0, 00:21:03.479 "high_priority_weight": 0, 00:21:03.479 "nvme_adminq_poll_period_us": 10000, 00:21:03.479 "nvme_ioq_poll_period_us": 0, 00:21:03.479 "io_queue_requests": 512, 00:21:03.479 "delay_cmd_submit": true, 00:21:03.479 "transport_retry_count": 4, 00:21:03.479 "bdev_retry_count": 3, 00:21:03.479 "transport_ack_timeout": 0, 00:21:03.479 "ctrlr_loss_timeout_sec": 0, 00:21:03.479 "reconnect_delay_sec": 0, 00:21:03.479 "fast_io_fail_timeout_sec": 0, 00:21:03.479 "disable_auto_failback": false, 00:21:03.479 "generate_uuids": false, 00:21:03.479 "transport_tos": 0, 00:21:03.479 "nvme_error_stat": false, 00:21:03.479 "rdma_srq_size": 0, 00:21:03.479 "io_path_stat": false, 00:21:03.479 "allow_accel_sequence": false, 00:21:03.479 "rdma_max_cq_size": 0, 00:21:03.479 "rdma_cm_event_timeout_ms": 0, 00:21:03.479 "dhchap_digests": [ 00:21:03.479 "sha256", 00:21:03.479 "sha384", 00:21:03.479 "sha512" 00:21:03.479 ], 00:21:03.479 
"dhchap_dhgroups": [ 00:21:03.479 "null", 00:21:03.479 "ffdhe2048", 00:21:03.479 "ffdhe3072", 00:21:03.479 "ffdhe4096", 00:21:03.479 "ffdhe6144", 00:21:03.479 "ffdhe8192" 00:21:03.479 ] 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "bdev_nvme_attach_controller", 00:21:03.479 "params": { 00:21:03.479 "name": "nvme0", 00:21:03.479 "trtype": "TCP", 00:21:03.479 "adrfam": "IPv4", 00:21:03.479 "traddr": "10.0.0.2", 00:21:03.479 "trsvcid": "4420", 00:21:03.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.479 "prchk_reftag": false, 00:21:03.479 "prchk_guard": false, 00:21:03.479 "ctrlr_loss_timeout_sec": 0, 00:21:03.479 "reconnect_delay_sec": 0, 00:21:03.479 "fast_io_fail_timeout_sec": 0, 00:21:03.479 "psk": "key0", 00:21:03.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.479 "hdgst": false, 00:21:03.479 "ddgst": false 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "bdev_nvme_set_hotplug", 00:21:03.479 "params": { 00:21:03.479 "period_us": 100000, 00:21:03.479 "enable": false 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "bdev_enable_histogram", 00:21:03.479 "params": { 00:21:03.479 "name": "nvme0n1", 00:21:03.479 "enable": true 00:21:03.479 } 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "method": "bdev_wait_for_examine" 00:21:03.479 } 00:21:03.479 ] 00:21:03.479 }, 00:21:03.479 { 00:21:03.479 "subsystem": "nbd", 00:21:03.479 "config": [] 00:21:03.479 } 00:21:03.479 ] 00:21:03.479 }' 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 246625 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 246625 ']' 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 246625 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 246625 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 246625' 00:21:03.479 killing process with pid 246625 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 246625 00:21:03.479 Received shutdown signal, test time was about 1.000000 seconds 00:21:03.479 00:21:03.479 Latency(us) 00:21:03.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.479 =================================================================================================================== 00:21:03.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.479 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 246625 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 246592 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 246592 ']' 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 246592 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 246592 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:03.739 09:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 246592' 00:21:03.739 killing process with pid 246592 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 246592 00:21:03.739 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 246592 00:21:03.998 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:03.998 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:03.998 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:03.998 "subsystems": [ 00:21:03.998 { 00:21:03.998 "subsystem": "keyring", 00:21:03.998 "config": [ 00:21:03.998 { 00:21:03.998 "method": "keyring_file_add_key", 00:21:03.998 "params": { 00:21:03.998 "name": "key0", 00:21:03.998 "path": "/tmp/tmp.JkXXP8sNf2" 00:21:03.998 } 00:21:03.998 } 00:21:03.998 ] 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "subsystem": "iobuf", 00:21:03.998 "config": [ 00:21:03.998 { 00:21:03.998 "method": "iobuf_set_options", 00:21:03.998 "params": { 00:21:03.998 "small_pool_count": 8192, 00:21:03.998 "large_pool_count": 1024, 00:21:03.998 "small_bufsize": 8192, 00:21:03.998 "large_bufsize": 135168 00:21:03.998 } 00:21:03.998 } 00:21:03.998 ] 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "subsystem": "sock", 00:21:03.998 "config": [ 00:21:03.998 { 00:21:03.998 "method": "sock_set_default_impl", 00:21:03.998 "params": { 00:21:03.998 "impl_name": "posix" 00:21:03.998 } 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "method": "sock_impl_set_options", 00:21:03.998 "params": { 00:21:03.998 "impl_name": "ssl", 00:21:03.998 "recv_buf_size": 4096, 00:21:03.998 "send_buf_size": 4096, 00:21:03.998 "enable_recv_pipe": true, 00:21:03.998 "enable_quickack": 
false, 00:21:03.998 "enable_placement_id": 0, 00:21:03.998 "enable_zerocopy_send_server": true, 00:21:03.998 "enable_zerocopy_send_client": false, 00:21:03.998 "zerocopy_threshold": 0, 00:21:03.998 "tls_version": 0, 00:21:03.998 "enable_ktls": false 00:21:03.998 } 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "method": "sock_impl_set_options", 00:21:03.998 "params": { 00:21:03.998 "impl_name": "posix", 00:21:03.998 "recv_buf_size": 2097152, 00:21:03.998 "send_buf_size": 2097152, 00:21:03.998 "enable_recv_pipe": true, 00:21:03.998 "enable_quickack": false, 00:21:03.998 "enable_placement_id": 0, 00:21:03.998 "enable_zerocopy_send_server": true, 00:21:03.998 "enable_zerocopy_send_client": false, 00:21:03.998 "zerocopy_threshold": 0, 00:21:03.998 "tls_version": 0, 00:21:03.998 "enable_ktls": false 00:21:03.998 } 00:21:03.998 } 00:21:03.998 ] 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "subsystem": "vmd", 00:21:03.998 "config": [] 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "subsystem": "accel", 00:21:03.998 "config": [ 00:21:03.998 { 00:21:03.998 "method": "accel_set_options", 00:21:03.998 "params": { 00:21:03.998 "small_cache_size": 128, 00:21:03.998 "large_cache_size": 16, 00:21:03.998 "task_count": 2048, 00:21:03.998 "sequence_count": 2048, 00:21:03.998 "buf_count": 2048 00:21:03.998 } 00:21:03.998 } 00:21:03.998 ] 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "subsystem": "bdev", 00:21:03.998 "config": [ 00:21:03.998 { 00:21:03.998 "method": "bdev_set_options", 00:21:03.998 "params": { 00:21:03.998 "bdev_io_pool_size": 65535, 00:21:03.998 "bdev_io_cache_size": 256, 00:21:03.998 "bdev_auto_examine": true, 00:21:03.998 "iobuf_small_cache_size": 128, 00:21:03.998 "iobuf_large_cache_size": 16 00:21:03.998 } 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "method": "bdev_raid_set_options", 00:21:03.998 "params": { 00:21:03.998 "process_window_size_kb": 1024, 00:21:03.998 "process_max_bandwidth_mb_sec": 0 00:21:03.998 } 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "method": 
"bdev_iscsi_set_options", 00:21:03.998 "params": { 00:21:03.998 "timeout_sec": 30 00:21:03.998 } 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "method": "bdev_nvme_set_options", 00:21:03.998 "params": { 00:21:03.998 "action_on_timeout": "none", 00:21:03.998 "timeout_us": 0, 00:21:03.998 "timeout_admin_us": 0, 00:21:03.998 "keep_alive_timeout_ms": 10000, 00:21:03.998 "arbitration_burst": 0, 00:21:03.998 "low_priority_weight": 0, 00:21:03.998 "medium_priority_weight": 0, 00:21:03.998 "high_priority_weight": 0, 00:21:03.998 "nvme_adminq_poll_period_us": 10000, 00:21:03.998 "nvme_ioq_poll_period_us": 0, 00:21:03.998 "io_queue_requests": 0, 00:21:03.998 "delay_cmd_submit": true, 00:21:03.998 "transport_retry_count": 4, 00:21:03.998 "bdev_retry_count": 3, 00:21:03.998 "transport_ack_timeout": 0, 00:21:03.998 "ctrlr_loss_timeout_sec": 0, 00:21:03.998 "reconnect_delay_sec": 0, 00:21:03.998 "fast_io_fail_timeout_sec": 0, 00:21:03.998 "disable_auto_failback": false, 00:21:03.998 "generate_uuids": false, 00:21:03.998 "transport_tos": 0, 00:21:03.998 "nvme_error_stat": false, 00:21:03.998 "rdma_srq_size": 0, 00:21:03.998 "io_path_stat": false, 00:21:03.998 "allow_accel_sequence": false, 00:21:03.998 "rdma_max_cq_size": 0, 00:21:03.998 "rdma_cm_event_timeout_ms": 0, 00:21:03.998 "dhchap_digests": [ 00:21:03.998 "sha256", 00:21:03.998 "sha384", 00:21:03.998 "sha512" 00:21:03.998 ], 00:21:03.998 "dhchap_dhgroups": [ 00:21:03.998 "null", 00:21:03.998 "ffdhe2048", 00:21:03.998 "ffdhe3072", 00:21:03.998 "ffdhe4096", 00:21:03.998 "ffdhe6144", 00:21:03.998 "ffdhe8192" 00:21:03.998 ] 00:21:03.998 } 00:21:03.998 }, 00:21:03.998 { 00:21:03.998 "method": "bdev_nvme_set_hotplug", 00:21:03.998 "params": { 00:21:03.998 "period_us": 100000, 00:21:03.998 "enable": false 00:21:03.998 } 00:21:03.998 }, 00:21:03.999 { 00:21:03.999 "method": "bdev_malloc_create", 00:21:03.999 "params": { 00:21:03.999 "name": "malloc0", 00:21:03.999 "num_blocks": 8192, 00:21:03.999 "block_size": 4096, 00:21:03.999 
"physical_block_size": 4096, 00:21:03.999 "uuid": "c5f06b64-6c64-4292-a241-724aa1a337cf", 00:21:03.999 "optimal_io_boundary": 0, 00:21:03.999 "md_size": 0, 00:21:03.999 "dif_type": 0, 00:21:03.999 "dif_is_head_of_md": false, 00:21:03.999 "dif_pi_format": 0 00:21:03.999 } 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "method": "bdev_wait_for_examine" 00:21:03.999 } 00:21:03.999 ] 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "subsystem": "nbd", 00:21:03.999 "config": [] 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "subsystem": "scheduler", 00:21:03.999 "config": [ 00:21:03.999 { 00:21:03.999 "method": "framework_set_scheduler", 00:21:03.999 "params": { 00:21:03.999 "name": "static" 00:21:03.999 } 00:21:03.999 } 00:21:03.999 ] 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "subsystem": "nvmf", 00:21:03.999 "config": [ 00:21:03.999 { 00:21:03.999 "method": "nvmf_set_config", 00:21:03.999 "params": { 00:21:03.999 "discovery_filter": "match_any", 00:21:03.999 "admin_cmd_passthru": { 00:21:03.999 "identify_ctrlr": false 00:21:03.999 }, 00:21:03.999 "dhchap_digests": [ 00:21:03.999 "sha256", 00:21:03.999 "sha384", 00:21:03.999 "sha512" 00:21:03.999 ], 00:21:03.999 "dhchap_dhgroups": [ 00:21:03.999 "null", 00:21:03.999 "ffdhe2048", 00:21:03.999 "ffdhe3072", 00:21:03.999 "ffdhe4096", 00:21:03.999 "ffdhe6144", 00:21:03.999 "ffdhe8192" 00:21:03.999 ] 00:21:03.999 } 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "method": "nvmf_set_max_subsystems", 00:21:03.999 "params": { 00:21:03.999 "max_subsystems": 1024 00:21:03.999 } 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "method": "nvmf_set_crdt", 00:21:03.999 "params": { 00:21:03.999 "crdt1": 0, 00:21:03.999 "crdt2": 0, 00:21:03.999 "crdt3": 0 00:21:03.999 } 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "method": "nvmf_create_transport", 00:21:03.999 "params": { 00:21:03.999 "trtype": "TCP", 00:21:03.999 "max_queue_depth": 128, 00:21:03.999 "max_io_qpairs_per_ctrlr": 127, 00:21:03.999 "in_capsule_data_size": 4096, 00:21:03.999 "max_io_size": 
131072, 00:21:03.999 "io_unit_size": 131072, 00:21:03.999 "max_aq_depth": 128, 00:21:03.999 "num_shared_buffers": 511, 00:21:03.999 "buf_cache_size": 4294967295, 00:21:03.999 "dif_insert_or_strip": false, 00:21:03.999 "zcopy": false, 00:21:03.999 "c2h_success": false, 00:21:03.999 "sock_priority": 0, 00:21:03.999 "abort_timeout_sec": 1, 00:21:03.999 "ack_timeout": 0, 00:21:03.999 "data_wr_pool_size": 0 00:21:03.999 } 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "method": "nvmf_create_subsystem", 00:21:03.999 "params": { 00:21:03.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.999 "allow_any_host": false, 00:21:03.999 "serial_number": "00000000000000000000", 00:21:03.999 "model_number": "SPDK bdev Controller", 00:21:03.999 "max_namespaces": 32, 00:21:03.999 "min_cntlid": 1, 00:21:03.999 "max_cntlid": 65519, 00:21:03.999 "ana_reporting": false 00:21:03.999 } 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "method": "nvmf_subsystem_add_host", 00:21:03.999 "params": { 00:21:03.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.999 "host": "nqn.2016-06.io.spdk:host1", 00:21:03.999 "psk": "key0" 00:21:03.999 } 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "method": "nvmf_subsystem_add_ns", 00:21:03.999 "params": { 00:21:03.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.999 "namespace": { 00:21:03.999 "nsid": 1, 00:21:03.999 "bdev_name": "malloc0", 00:21:03.999 "nguid": "C5F06B646C644292A241724AA1A337CF", 00:21:03.999 "uuid": "c5f06b64-6c64-4292-a241-724aa1a337cf", 00:21:03.999 "no_auto_visible": false 00:21:03.999 } 00:21:03.999 } 00:21:03.999 }, 00:21:03.999 { 00:21:03.999 "method": "nvmf_subsystem_add_listener", 00:21:03.999 "params": { 00:21:03.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.999 "listen_address": { 00:21:03.999 "trtype": "TCP", 00:21:03.999 "adrfam": "IPv4", 00:21:03.999 "traddr": "10.0.0.2", 00:21:03.999 "trsvcid": "4420" 00:21:03.999 }, 00:21:03.999 "secure_channel": false, 00:21:03.999 "sock_impl": "ssl" 00:21:03.999 } 00:21:03.999 } 00:21:03.999 ] 
00:21:03.999 } 00:21:03.999 ] 00:21:03.999 }' 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=247009 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 247009 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 247009 ']' 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.999 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.258 [2024-10-07 09:41:53.024847] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:21:04.258 [2024-10-07 09:41:53.024943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.258 [2024-10-07 09:41:53.087060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.258 [2024-10-07 09:41:53.194686] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.258 [2024-10-07 09:41:53.194756] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.258 [2024-10-07 09:41:53.194779] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.258 [2024-10-07 09:41:53.194790] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.258 [2024-10-07 09:41:53.194799] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.258 [2024-10-07 09:41:53.195319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.517 [2024-10-07 09:41:53.442719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.517 [2024-10-07 09:41:53.474770] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.517 [2024-10-07 09:41:53.475050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.084 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:05.084 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:05.084 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:05.084 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=247156 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 247156 /var/tmp/bdevperf.sock 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 247156 ']' 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:05.085 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:05.085 "subsystems": [ 00:21:05.085 { 00:21:05.085 "subsystem": "keyring", 00:21:05.085 "config": [ 00:21:05.085 { 00:21:05.085 "method": "keyring_file_add_key", 00:21:05.085 "params": { 00:21:05.085 "name": "key0", 00:21:05.085 "path": "/tmp/tmp.JkXXP8sNf2" 00:21:05.085 } 00:21:05.085 } 00:21:05.085 ] 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "subsystem": "iobuf", 00:21:05.085 "config": [ 00:21:05.085 { 00:21:05.085 "method": "iobuf_set_options", 00:21:05.085 "params": { 00:21:05.085 "small_pool_count": 8192, 00:21:05.085 "large_pool_count": 1024, 00:21:05.085 "small_bufsize": 8192, 00:21:05.085 "large_bufsize": 135168 00:21:05.085 } 00:21:05.085 } 00:21:05.085 ] 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "subsystem": "sock", 00:21:05.085 "config": [ 00:21:05.085 { 00:21:05.085 "method": "sock_set_default_impl", 00:21:05.085 "params": { 00:21:05.085 "impl_name": "posix" 00:21:05.085 } 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "method": "sock_impl_set_options", 00:21:05.085 "params": { 00:21:05.085 "impl_name": "ssl", 00:21:05.085 "recv_buf_size": 4096, 00:21:05.085 "send_buf_size": 4096, 00:21:05.085 "enable_recv_pipe": true, 00:21:05.085 "enable_quickack": false, 00:21:05.085 "enable_placement_id": 0, 00:21:05.085 "enable_zerocopy_send_server": true, 00:21:05.085 "enable_zerocopy_send_client": false, 00:21:05.085 "zerocopy_threshold": 0, 00:21:05.085 "tls_version": 0, 00:21:05.085 "enable_ktls": false 00:21:05.085 } 00:21:05.085 }, 00:21:05.085 { 
00:21:05.085 "method": "sock_impl_set_options", 00:21:05.085 "params": { 00:21:05.085 "impl_name": "posix", 00:21:05.085 "recv_buf_size": 2097152, 00:21:05.085 "send_buf_size": 2097152, 00:21:05.085 "enable_recv_pipe": true, 00:21:05.085 "enable_quickack": false, 00:21:05.085 "enable_placement_id": 0, 00:21:05.085 "enable_zerocopy_send_server": true, 00:21:05.085 "enable_zerocopy_send_client": false, 00:21:05.085 "zerocopy_threshold": 0, 00:21:05.085 "tls_version": 0, 00:21:05.085 "enable_ktls": false 00:21:05.085 } 00:21:05.085 } 00:21:05.085 ] 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "subsystem": "vmd", 00:21:05.085 "config": [] 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "subsystem": "accel", 00:21:05.085 "config": [ 00:21:05.085 { 00:21:05.085 "method": "accel_set_options", 00:21:05.085 "params": { 00:21:05.085 "small_cache_size": 128, 00:21:05.085 "large_cache_size": 16, 00:21:05.085 "task_count": 2048, 00:21:05.085 "sequence_count": 2048, 00:21:05.085 "buf_count": 2048 00:21:05.085 } 00:21:05.085 } 00:21:05.085 ] 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "subsystem": "bdev", 00:21:05.085 "config": [ 00:21:05.085 { 00:21:05.085 "method": "bdev_set_options", 00:21:05.085 "params": { 00:21:05.085 "bdev_io_pool_size": 65535, 00:21:05.085 "bdev_io_cache_size": 256, 00:21:05.085 "bdev_auto_examine": true, 00:21:05.085 "iobuf_small_cache_size": 128, 00:21:05.085 "iobuf_large_cache_size": 16 00:21:05.085 } 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "method": "bdev_raid_set_options", 00:21:05.085 "params": { 00:21:05.085 "process_window_size_kb": 1024, 00:21:05.085 "process_max_bandwidth_mb_sec": 0 00:21:05.085 } 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "method": "bdev_iscsi_set_options", 00:21:05.085 "params": { 00:21:05.085 "timeout_sec": 30 00:21:05.085 } 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "method": "bdev_nvme_set_options", 00:21:05.085 "params": { 00:21:05.085 "action_on_timeout": "none", 00:21:05.085 "timeout_us": 0, 00:21:05.085 
"timeout_admin_us": 0, 00:21:05.085 "keep_alive_timeout_ms": 10000, 00:21:05.085 "arbitration_burst": 0, 00:21:05.085 "low_priority_weight": 0, 00:21:05.085 "medium_priority_weight": 0, 00:21:05.085 "high_priority_weight": 0, 00:21:05.085 "nvme_adminq_poll_period_us": 10000, 00:21:05.085 "nvme_ioq_poll_period_us": 0, 00:21:05.085 "io_queue_requests": 512, 00:21:05.085 "delay_cmd_submit": true, 00:21:05.085 "transport_retry_count": 4, 00:21:05.085 "bdev_retry_count": 3, 00:21:05.085 "transport_ack_timeout": 0, 00:21:05.085 "ctrlr_loss_timeout_sec": 0, 00:21:05.085 "reconnect_delay_sec": 0, 00:21:05.085 "fast_io_fail_timeout_sec": 0, 00:21:05.085 "disable_auto_failback": false, 00:21:05.085 "generate_uuids": false, 00:21:05.085 "transport_tos": 0, 00:21:05.085 "nvme_error_stat": false, 00:21:05.085 "rdma_srq_size": 0, 00:21:05.085 "io_path_stat": false, 00:21:05.085 "allow_accel_sequence": false, 00:21:05.085 "rdma_max_cq_size": 0, 00:21:05.085 "rdma_cm_event_timeout_ms": 0, 00:21:05.085 "dhchap_digests": [ 00:21:05.085 "sha256", 00:21:05.085 "sha384", 00:21:05.085 "sha512" 00:21:05.085 ], 00:21:05.085 "dhchap_dhgroups": [ 00:21:05.085 "null", 00:21:05.085 "ffdhe2048", 00:21:05.085 "ffdhe3072", 00:21:05.085 "ffdhe4096", 00:21:05.085 "ffdhe6144", 00:21:05.085 "ffdhe8192" 00:21:05.085 ] 00:21:05.085 } 00:21:05.085 }, 00:21:05.085 { 00:21:05.085 "method": "bdev_nvme_attach_controller", 00:21:05.085 "params": { 00:21:05.085 "name": "nvme0", 00:21:05.085 "trtype": "TCP", 00:21:05.085 "adrfam": "IPv4", 00:21:05.085 "traddr": "10.0.0.2", 00:21:05.085 "trsvcid": "4420", 00:21:05.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.085 "prchk_reftag": false, 00:21:05.085 "prchk_guard": false, 00:21:05.085 "ctrlr_loss_timeout_sec": 0, 00:21:05.085 "reconnect_delay_sec": 0, 00:21:05.086 "fast_io_fail_timeout_sec": 0, 00:21:05.086 "psk": "key0", 00:21:05.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.086 "hdgst": false, 00:21:05.086 "ddgst": false 00:21:05.086 } 
00:21:05.086 }, 00:21:05.086 { 00:21:05.086 "method": "bdev_nvme_set_hotplug", 00:21:05.086 "params": { 00:21:05.086 "period_us": 100000, 00:21:05.086 "enable": false 00:21:05.086 } 00:21:05.086 }, 00:21:05.086 { 00:21:05.086 "method": "bdev_enable_histogram", 00:21:05.086 "params": { 00:21:05.086 "name": "nvme0n1", 00:21:05.086 "enable": true 00:21:05.086 } 00:21:05.086 }, 00:21:05.086 { 00:21:05.086 "method": "bdev_wait_for_examine" 00:21:05.086 } 00:21:05.086 ] 00:21:05.086 }, 00:21:05.086 { 00:21:05.086 "subsystem": "nbd", 00:21:05.086 "config": [] 00:21:05.086 } 00:21:05.086 ] 00:21:05.086 }' 00:21:05.086 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.086 [2024-10-07 09:41:54.073905] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:21:05.086 [2024-10-07 09:41:54.073991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247156 ] 00:21:05.346 [2024-10-07 09:41:54.128889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.346 [2024-10-07 09:41:54.239423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.605 [2024-10-07 09:41:54.419978] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.170 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.170 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:06.170 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:06.170 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 
00:21:06.426 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.426 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:06.684 Running I/O for 1 seconds... 00:21:07.650 3189.00 IOPS, 12.46 MiB/s 00:21:07.650 Latency(us) 00:21:07.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.650 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:07.650 Verification LBA range: start 0x0 length 0x2000 00:21:07.650 nvme0n1 : 1.03 3216.44 12.56 0.00 0.00 39290.05 6310.87 60584.39 00:21:07.650 =================================================================================================================== 00:21:07.650 Total : 3216.44 12.56 0.00 0.00 39290.05 6310.87 60584.39 00:21:07.650 { 00:21:07.650 "results": [ 00:21:07.650 { 00:21:07.650 "job": "nvme0n1", 00:21:07.650 "core_mask": "0x2", 00:21:07.650 "workload": "verify", 00:21:07.650 "status": "finished", 00:21:07.650 "verify_range": { 00:21:07.650 "start": 0, 00:21:07.650 "length": 8192 00:21:07.650 }, 00:21:07.650 "queue_depth": 128, 00:21:07.650 "io_size": 4096, 00:21:07.650 "runtime": 1.031263, 00:21:07.650 "iops": 3216.444301793044, 00:21:07.650 "mibps": 12.564235553879078, 00:21:07.650 "io_failed": 0, 00:21:07.650 "io_timeout": 0, 00:21:07.650 "avg_latency_us": 39290.05367902723, 00:21:07.650 "min_latency_us": 6310.874074074074, 00:21:07.650 "max_latency_us": 60584.39111111111 00:21:07.650 } 00:21:07.650 ], 00:21:07.650 "core_count": 1 00:21:07.650 } 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 
00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:07.650 nvmf_trace.0 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 247156 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 247156 ']' 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 247156 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 247156 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:07.650 09:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 247156' 00:21:07.650 killing process with pid 247156 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 247156 00:21:07.650 Received shutdown signal, test time was about 1.000000 seconds 00:21:07.650 00:21:07.650 Latency(us) 00:21:07.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.650 =================================================================================================================== 00:21:07.650 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.650 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 247156 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.219 rmmod nvme_tcp 00:21:08.219 rmmod nvme_fabrics 00:21:08.219 rmmod nvme_keyring 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:08.219 09:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 247009 ']' 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 247009 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 247009 ']' 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 247009 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 247009 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 247009' 00:21:08.219 killing process with pid 247009 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 247009 00:21:08.219 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 247009 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.479 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rYgSbdgz6W /tmp/tmp.SbXL90oPP0 /tmp/tmp.JkXXP8sNf2 00:21:10.384 00:21:10.384 real 1m25.646s 00:21:10.384 user 2m25.487s 00:21:10.384 sys 0m24.401s 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 ************************************ 00:21:10.384 END TEST nvmf_tls 00:21:10.384 ************************************ 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 ************************************ 00:21:10.384 START TEST nvmf_fips 
00:21:10.384 ************************************ 00:21:10.384 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:10.643 * Looking for test storage... 00:21:10.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:10.643 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:10.643 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:21:10.643 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:10.643 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.644 09:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:10.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.644 --rc genhtml_branch_coverage=1 00:21:10.644 --rc genhtml_function_coverage=1 00:21:10.644 --rc genhtml_legend=1 00:21:10.644 --rc geninfo_all_blocks=1 00:21:10.644 --rc geninfo_unexecuted_blocks=1 00:21:10.644 00:21:10.644 ' 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:10.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.644 --rc genhtml_branch_coverage=1 00:21:10.644 --rc genhtml_function_coverage=1 00:21:10.644 --rc genhtml_legend=1 00:21:10.644 --rc geninfo_all_blocks=1 00:21:10.644 --rc geninfo_unexecuted_blocks=1 00:21:10.644 00:21:10.644 ' 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:10.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.644 --rc genhtml_branch_coverage=1 00:21:10.644 --rc genhtml_function_coverage=1 00:21:10.644 --rc genhtml_legend=1 00:21:10.644 --rc geninfo_all_blocks=1 00:21:10.644 --rc geninfo_unexecuted_blocks=1 00:21:10.644 00:21:10.644 ' 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:10.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.644 --rc genhtml_branch_coverage=1 00:21:10.644 --rc genhtml_function_coverage=1 00:21:10.644 --rc genhtml_legend=1 00:21:10.644 --rc geninfo_all_blocks=1 00:21:10.644 --rc geninfo_unexecuted_blocks=1 00:21:10.644 00:21:10.644 ' 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.644 
09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s 
extglob 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.644 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.645 09:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:10.645 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:10.904 Error setting digest 00:21:10.904 40328968B07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:10.904 40328968B07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:10.904 09:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.904 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:21:12.802 Found 0000:09:00.0 (0x8086 - 0x1592) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:21:12.802 Found 0000:09:00.1 (0x8086 - 0x1592) 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 
00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.802 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:12.803 Found net devices under 0000:09:00.0: cvl_0_0 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:12.803 Found net devices under 0000:09:00.1: cvl_0_1 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.803 09:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.803 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:21:13.061 00:21:13.061 --- 10.0.0.2 ping statistics --- 00:21:13.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.061 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:13.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:21:13.061 00:21:13.061 --- 10.0.0.1 ping statistics --- 00:21:13.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.061 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:13.061 09:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=249524 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 249524 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 249524 ']' 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.061 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.061 [2024-10-07 09:42:02.016728] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:21:13.061 [2024-10-07 09:42:02.016810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.319 [2024-10-07 09:42:02.076101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.319 [2024-10-07 09:42:02.189440] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.319 [2024-10-07 09:42:02.189511] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.319 [2024-10-07 09:42:02.189534] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.319 [2024-10-07 09:42:02.189545] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.319 [2024-10-07 09:42:02.189554] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.319 [2024-10-07 09:42:02.190125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.319 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.319 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:13.319 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:13.319 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.319 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.319 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.319 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:13.319 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:13.579 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:13.579 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.FWj 00:21:13.579 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:13.579 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.FWj 00:21:13.579 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.FWj 00:21:13.579 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.FWj 00:21:13.579 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.837 [2024-10-07 09:42:02.576310] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.837 [2024-10-07 09:42:02.592289] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.837 [2024-10-07 09:42:02.592510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.837 malloc0 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=249665 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 249665 /var/tmp/bdevperf.sock 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 249665 ']' 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.837 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.837 [2024-10-07 09:42:02.729192] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:21:13.837 [2024-10-07 09:42:02.729266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249665 ] 00:21:13.837 [2024-10-07 09:42:02.783901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.095 [2024-10-07 09:42:02.890839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.095 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.095 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:14.095 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.FWj 00:21:14.352 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:14.609 [2024-10-07 09:42:03.601504] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.867 TLSTESTn1 00:21:14.867 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.867 Running I/O for 10 seconds... 
00:21:25.110 3305.00 IOPS, 12.91 MiB/s 3281.00 IOPS, 12.82 MiB/s 3323.67 IOPS, 12.98 MiB/s 3282.75 IOPS, 12.82 MiB/s 3274.80 IOPS, 12.79 MiB/s 3301.00 IOPS, 12.89 MiB/s 3314.00 IOPS, 12.95 MiB/s 3326.62 IOPS, 12.99 MiB/s 3333.89 IOPS, 13.02 MiB/s 3343.80 IOPS, 13.06 MiB/s 00:21:25.110 Latency(us) 00:21:25.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.110 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:25.110 Verification LBA range: start 0x0 length 0x2000 00:21:25.110 TLSTESTn1 : 10.04 3342.42 13.06 0.00 0.00 38200.23 11068.30 38253.61 00:21:25.110 =================================================================================================================== 00:21:25.110 Total : 3342.42 13.06 0.00 0.00 38200.23 11068.30 38253.61 00:21:25.110 { 00:21:25.110 "results": [ 00:21:25.110 { 00:21:25.110 "job": "TLSTESTn1", 00:21:25.110 "core_mask": "0x4", 00:21:25.110 "workload": "verify", 00:21:25.110 "status": "finished", 00:21:25.110 "verify_range": { 00:21:25.110 "start": 0, 00:21:25.110 "length": 8192 00:21:25.110 }, 00:21:25.110 "queue_depth": 128, 00:21:25.110 "io_size": 4096, 00:21:25.110 "runtime": 10.042124, 00:21:25.110 "iops": 3342.420388356089, 00:21:25.110 "mibps": 13.056329642015973, 00:21:25.110 "io_failed": 0, 00:21:25.110 "io_timeout": 0, 00:21:25.110 "avg_latency_us": 38200.225024170904, 00:21:25.110 "min_latency_us": 11068.302222222223, 00:21:25.110 "max_latency_us": 38253.60592592593 00:21:25.110 } 00:21:25.110 ], 00:21:25.110 "core_count": 1 00:21:25.110 } 00:21:25.110 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:25.110 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:25.110 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:21:25.110 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:21:25.110 09:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:25.110 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:25.110 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:25.111 nvmf_trace.0 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 249665 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 249665 ']' 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 249665 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 249665 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 249665' 
00:21:25.111 killing process with pid 249665 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 249665 00:21:25.111 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.111 00:21:25.111 Latency(us) 00:21:25.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.111 =================================================================================================================== 00:21:25.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.111 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 249665 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.370 rmmod nvme_tcp 00:21:25.370 rmmod nvme_fabrics 00:21:25.370 rmmod nvme_keyring 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 249524 ']' 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 249524 00:21:25.370 09:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 249524 ']' 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 249524 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 249524 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 249524' 00:21:25.370 killing process with pid 249524 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 249524 00:21:25.370 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 249524 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.628 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.FWj 00:21:28.167 00:21:28.167 real 0m17.260s 00:21:28.167 user 0m23.143s 00:21:28.167 sys 0m5.295s 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.167 ************************************ 00:21:28.167 END TEST nvmf_fips 00:21:28.167 ************************************ 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:28.167 ************************************ 00:21:28.167 START TEST nvmf_control_msg_list 00:21:28.167 ************************************ 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:28.167 * Looking for test storage... 00:21:28.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:28.167 
09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:28.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.167 --rc genhtml_branch_coverage=1 00:21:28.167 --rc genhtml_function_coverage=1 00:21:28.167 --rc genhtml_legend=1 00:21:28.167 --rc geninfo_all_blocks=1 00:21:28.167 --rc geninfo_unexecuted_blocks=1 00:21:28.167 00:21:28.167 ' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:28.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.167 --rc genhtml_branch_coverage=1 00:21:28.167 --rc genhtml_function_coverage=1 00:21:28.167 --rc genhtml_legend=1 00:21:28.167 --rc geninfo_all_blocks=1 00:21:28.167 --rc geninfo_unexecuted_blocks=1 00:21:28.167 00:21:28.167 ' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:28.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.167 --rc genhtml_branch_coverage=1 00:21:28.167 --rc genhtml_function_coverage=1 00:21:28.167 --rc genhtml_legend=1 00:21:28.167 --rc geninfo_all_blocks=1 00:21:28.167 --rc geninfo_unexecuted_blocks=1 00:21:28.167 00:21:28.167 ' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:28.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.167 --rc genhtml_branch_coverage=1 00:21:28.167 --rc genhtml_function_coverage=1 00:21:28.167 --rc genhtml_legend=1 00:21:28.167 --rc geninfo_all_blocks=1 00:21:28.167 --rc geninfo_unexecuted_blocks=1 00:21:28.167 00:21:28.167 ' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.167 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 
-- # nvmftestinit 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.168 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.071 09:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.071 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:21:30.071 Found 0000:09:00.0 (0x8086 - 0x1592) 00:21:30.072 09:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:21:30.072 Found 0000:09:00.1 (0x8086 - 0x1592) 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:30.072 
09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:30.072 Found net devices under 0000:09:00.0: cvl_0_0 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.072 09:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:30.072 Found net devices under 0000:09:00.1: cvl_0_1 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.072 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:30.072 09:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:30.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:21:30.072 00:21:30.072 --- 10.0.0.2 ping statistics --- 00:21:30.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.072 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:21:30.072 00:21:30.072 --- 10.0.0.1 ping statistics --- 00:21:30.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.072 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 
00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=253306 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 253306 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 253306 ']' 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.072 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.330 [2024-10-07 09:42:19.084473] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:21:30.330 [2024-10-07 09:42:19.084551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.330 [2024-10-07 09:42:19.145052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.330 [2024-10-07 09:42:19.251414] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.330 [2024-10-07 09:42:19.251471] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.330 [2024-10-07 09:42:19.251495] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.330 [2024-10-07 09:42:19.251506] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.330 [2024-10-07 09:42:19.251515] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:30.330 [2024-10-07 09:42:19.252103] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.588 [2024-10-07 09:42:19.382880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.588 Malloc0 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:30.588 [2024-10-07 09:42:19.437984] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=253417 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=253418 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=253419 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:30.588 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 253417 00:21:30.588 [2024-10-07 09:42:19.506873] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:30.588 [2024-10-07 09:42:19.507258] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:30.588 [2024-10-07 09:42:19.507495] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:31.964 Initializing NVMe Controllers 00:21:31.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:31.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:31.964 Initialization complete. Launching workers. 00:21:31.964 ======================================================== 00:21:31.964 Latency(us) 00:21:31.964 Device Information : IOPS MiB/s Average min max 00:21:31.964 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 24.00 0.09 41737.31 40898.50 42119.89 00:21:31.964 ======================================================== 00:21:31.964 Total : 24.00 0.09 41737.31 40898.50 42119.89 00:21:31.964 00:21:31.964 Initializing NVMe Controllers 00:21:31.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:31.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:31.964 Initialization complete. Launching workers. 
00:21:31.964 ======================================================== 00:21:31.964 Latency(us) 00:21:31.964 Device Information : IOPS MiB/s Average min max 00:21:31.964 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41019.68 40866.38 41901.01 00:21:31.964 ======================================================== 00:21:31.964 Total : 25.00 0.10 41019.68 40866.38 41901.01 00:21:31.964 00:21:31.964 Initializing NVMe Controllers 00:21:31.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:31.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:31.964 Initialization complete. Launching workers. 00:21:31.964 ======================================================== 00:21:31.964 Latency(us) 00:21:31.964 Device Information : IOPS MiB/s Average min max 00:21:31.964 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 26.00 0.10 39321.11 186.45 40971.86 00:21:31.964 ======================================================== 00:21:31.964 Total : 26.00 0.10 39321.11 186.45 40971.86 00:21:31.964 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 253418 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 253419 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.964 09:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.964 rmmod nvme_tcp 00:21:31.964 rmmod nvme_fabrics 00:21:31.964 rmmod nvme_keyring 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 253306 ']' 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 253306 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 253306 ']' 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 253306 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 253306 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:31.964 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 253306' 00:21:31.965 killing process with pid 253306 00:21:31.965 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 253306 00:21:31.965 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 253306 00:21:32.224 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:32.224 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:32.224 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:32.224 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:32.224 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:21:32.224 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:32.224 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:21:32.225 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.225 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.225 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.225 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.225 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.765 00:21:34.765 real 0m6.492s 00:21:34.765 user 0m6.125s 00:21:34.765 sys 
0m2.461s
00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:34.765 ************************************
00:21:34.765 END TEST nvmf_control_msg_list
00:21:34.765 ************************************
00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:34.765 ************************************
00:21:34.765 START TEST nvmf_wait_for_buf
00:21:34.765 ************************************
00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:21:34.765 * Looking for test storage...
00:21:34.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:34.765 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:21:34.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.766 --rc genhtml_branch_coverage=1 00:21:34.766 --rc genhtml_function_coverage=1 00:21:34.766 --rc genhtml_legend=1 00:21:34.766 --rc geninfo_all_blocks=1 00:21:34.766 --rc geninfo_unexecuted_blocks=1 00:21:34.766 00:21:34.766 ' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:34.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.766 --rc genhtml_branch_coverage=1 00:21:34.766 --rc genhtml_function_coverage=1 00:21:34.766 --rc genhtml_legend=1 00:21:34.766 --rc geninfo_all_blocks=1 00:21:34.766 --rc geninfo_unexecuted_blocks=1 00:21:34.766 00:21:34.766 ' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:34.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.766 --rc genhtml_branch_coverage=1 00:21:34.766 --rc genhtml_function_coverage=1 00:21:34.766 --rc genhtml_legend=1 00:21:34.766 --rc geninfo_all_blocks=1 00:21:34.766 --rc geninfo_unexecuted_blocks=1 00:21:34.766 00:21:34.766 ' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:34.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.766 --rc genhtml_branch_coverage=1 00:21:34.766 --rc genhtml_function_coverage=1 00:21:34.766 --rc genhtml_legend=1 00:21:34.766 --rc geninfo_all_blocks=1 00:21:34.766 --rc geninfo_unexecuted_blocks=1 00:21:34.766 00:21:34.766 ' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.766 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.671 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.671 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.671 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.671 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.671 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.671 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.671 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:21:36.672 Found 0000:09:00.0 (0x8086 - 0x1592) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:21:36.672 Found 0000:09:00.1 (0x8086 - 0x1592) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:36.672 Found net devices under 0000:09:00.0: cvl_0_0 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:36.672 09:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:36.672 Found net devices under 0000:09:00.1: cvl_0_1 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:36.672 09:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.672 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.673 09:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:36.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:36.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms
00:21:36.673
00:21:36.673 --- 10.0.0.2 ping statistics ---
00:21:36.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:36.673 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:36.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:36.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:21:36.673
00:21:36.673 --- 10.0.0.1 ping statistics ---
00:21:36.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:36.673 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=255391
00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 255391 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 255391 ']' 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:36.673 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.673 [2024-10-07 09:42:25.664856] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:21:36.673 [2024-10-07 09:42:25.664936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.931 [2024-10-07 09:42:25.732366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.931 [2024-10-07 09:42:25.840811] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.931 [2024-10-07 09:42:25.840871] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:36.931 [2024-10-07 09:42:25.840883] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.931 [2024-10-07 09:42:25.840895] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.931 [2024-10-07 09:42:25.840904] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.931 [2024-10-07 09:42:25.841490] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.931 
09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.931 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:37.189 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.189 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:37.189 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.189 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:37.189 Malloc0 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.189 [2024-10-07 09:42:26.009873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:37.189 [2024-10-07 09:42:26.034090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:37.189 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:37.190 [2024-10-07 09:42:26.104787] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:38.566 Initializing NVMe Controllers 00:21:38.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:38.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:38.566 Initialization complete. Launching workers. 00:21:38.566 ======================================================== 00:21:38.566 Latency(us) 00:21:38.566 Device Information : IOPS MiB/s Average min max 00:21:38.566 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33292.83 23967.78 63860.90 00:21:38.566 ======================================================== 00:21:38.566 Total : 125.00 15.62 33292.83 23967.78 63860.90 00:21:38.566 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.824 09:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.824 rmmod nvme_tcp 00:21:38.824 rmmod nvme_fabrics 00:21:38.824 rmmod nvme_keyring 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 255391 ']' 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 255391 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 255391 ']' 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 255391 
00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 255391 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 255391' 00:21:38.824 killing process with pid 255391 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 255391 00:21:38.824 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 255391 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.082 09:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.082 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.989 00:21:40.989 real 0m6.722s 00:21:40.989 user 0m3.175s 00:21:40.989 sys 0m1.980s 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:40.989 ************************************ 00:21:40.989 END TEST nvmf_wait_for_buf 00:21:40.989 ************************************ 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.989 09:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.518 
09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:21:43.518 Found 0000:09:00.0 (0x8086 - 0x1592) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.518 09:42:32 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:21:43.518 Found 0000:09:00.1 (0x8086 - 0x1592) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.518 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:43.518 Found net devices under 0000:09:00.0: cvl_0_0 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:43.519 Found net devices under 0000:09:00.1: cvl_0_1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.519 ************************************ 00:21:43.519 START TEST nvmf_perf_adq 00:21:43.519 ************************************ 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:43.519 * Looking for test storage... 00:21:43.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:43.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.519 --rc genhtml_branch_coverage=1 00:21:43.519 --rc genhtml_function_coverage=1 00:21:43.519 --rc genhtml_legend=1 00:21:43.519 --rc geninfo_all_blocks=1 00:21:43.519 --rc geninfo_unexecuted_blocks=1 00:21:43.519 00:21:43.519 ' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:43.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.519 --rc genhtml_branch_coverage=1 00:21:43.519 --rc genhtml_function_coverage=1 00:21:43.519 --rc genhtml_legend=1 00:21:43.519 --rc geninfo_all_blocks=1 00:21:43.519 --rc geninfo_unexecuted_blocks=1 00:21:43.519 00:21:43.519 ' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:43.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.519 --rc genhtml_branch_coverage=1 00:21:43.519 --rc genhtml_function_coverage=1 00:21:43.519 --rc genhtml_legend=1 00:21:43.519 --rc geninfo_all_blocks=1 00:21:43.519 --rc geninfo_unexecuted_blocks=1 00:21:43.519 00:21:43.519 ' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:43.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.519 --rc genhtml_branch_coverage=1 00:21:43.519 --rc genhtml_function_coverage=1 00:21:43.519 --rc genhtml_legend=1 00:21:43.519 --rc geninfo_all_blocks=1 00:21:43.519 --rc geninfo_unexecuted_blocks=1 00:21:43.519 00:21:43.519 ' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.519 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.520 09:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.520 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.418 09:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.418 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:21:45.419 Found 0000:09:00.0 (0x8086 - 0x1592) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:21:45.419 
Found 0000:09:00.1 (0x8086 - 0x1592) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:45.419 Found net devices under 0000:09:00.0: cvl_0_0 00:21:45.419 09:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:45.419 Found net devices under 0000:09:00.1: cvl_0_1 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:45.419 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:45.984 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:49.268 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.555 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:21:54.556 Found 0000:09:00.0 (0x8086 - 0x1592) 00:21:54.556 09:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:21:54.556 Found 0000:09:00.1 (0x8086 - 0x1592) 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:54.556 Found net devices under 0000:09:00.0: cvl_0_0 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:54.556 Found net devices under 0000:09:00.1: cvl_0_1 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:54.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:54.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:21:54.556 00:21:54.556 --- 10.0.0.2 ping statistics --- 00:21:54.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.556 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:54.556 00:21:54.556 --- 10.0.0.1 ping statistics --- 00:21:54.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.556 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=260184 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 260184 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 260184 ']' 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.556 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.556 [2024-10-07 09:42:43.458086] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:21:54.556 [2024-10-07 09:42:43.458182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.556 [2024-10-07 09:42:43.518710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.815 [2024-10-07 09:42:43.622037] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.815 [2024-10-07 09:42:43.622090] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.815 [2024-10-07 09:42:43.622113] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.815 [2024-10-07 09:42:43.622123] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.815 [2024-10-07 09:42:43.622132] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
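The `ip`/`iptables` calls recorded earlier in this run (the `nvmf_tcp_init` steps) can be restated as a standalone sketch. Everything here is copied from this log: the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are specific to this test host and will differ elsewhere; running it requires root and a two-port NIC whose ports are wired back-to-back or on the same link.

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based TCP test topology from this run:
# one NIC port is moved into a private network namespace (the target
# side), the other stays in the root namespace (the initiator side),
# so traffic between 10.0.0.1 and 10.0.0.2 crosses real hardware.
set -e
TARGET_IF=cvl_0_0            # moved into the namespace
INITIATOR_IF=cvl_0_1         # stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP (port 4420) in before any drop rules.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity checks in both directions, as the log does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

This mirrors why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`: the listener at 10.0.0.2:4420 only exists inside that namespace.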
00:21:54.815 [2024-10-07 09:42:43.623527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.815 [2024-10-07 09:42:43.623633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.815 [2024-10-07 09:42:43.623734] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.815 [2024-10-07 09:42:43.623738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:54.815 09:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.815 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.074 [2024-10-07 09:42:43.871089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.074 Malloc1 00:21:55.074 09:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.074 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.075 [2024-10-07 09:42:43.924300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=260211 00:21:55.075 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:55.075 09:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:56.979 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:56.979 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.979 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.979 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.979 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:56.979 "tick_rate": 2700000000, 00:21:56.979 "poll_groups": [ 00:21:56.979 { 00:21:56.979 "name": "nvmf_tgt_poll_group_000", 00:21:56.979 "admin_qpairs": 1, 00:21:56.979 "io_qpairs": 1, 00:21:56.979 "current_admin_qpairs": 1, 00:21:56.979 "current_io_qpairs": 1, 00:21:56.979 "pending_bdev_io": 0, 00:21:56.979 "completed_nvme_io": 18802, 00:21:56.979 "transports": [ 00:21:56.979 { 00:21:56.979 "trtype": "TCP" 00:21:56.979 } 00:21:56.979 ] 00:21:56.979 }, 00:21:56.979 { 00:21:56.979 "name": "nvmf_tgt_poll_group_001", 00:21:56.979 "admin_qpairs": 0, 00:21:56.979 "io_qpairs": 1, 00:21:56.979 "current_admin_qpairs": 0, 00:21:56.979 "current_io_qpairs": 1, 00:21:56.979 "pending_bdev_io": 0, 00:21:56.979 "completed_nvme_io": 19133, 00:21:56.979 "transports": [ 00:21:56.979 { 00:21:56.979 "trtype": "TCP" 00:21:56.979 } 00:21:56.979 ] 00:21:56.979 }, 00:21:56.979 { 00:21:56.979 "name": "nvmf_tgt_poll_group_002", 00:21:56.979 "admin_qpairs": 0, 00:21:56.979 "io_qpairs": 1, 00:21:56.979 "current_admin_qpairs": 0, 00:21:56.979 "current_io_qpairs": 1, 00:21:56.979 "pending_bdev_io": 0, 00:21:56.979 "completed_nvme_io": 20121, 00:21:56.979 
"transports": [ 00:21:56.979 { 00:21:56.979 "trtype": "TCP" 00:21:56.979 } 00:21:56.979 ] 00:21:56.979 }, 00:21:56.979 { 00:21:56.979 "name": "nvmf_tgt_poll_group_003", 00:21:56.979 "admin_qpairs": 0, 00:21:56.979 "io_qpairs": 1, 00:21:56.979 "current_admin_qpairs": 0, 00:21:56.979 "current_io_qpairs": 1, 00:21:56.979 "pending_bdev_io": 0, 00:21:56.979 "completed_nvme_io": 19970, 00:21:56.979 "transports": [ 00:21:56.979 { 00:21:56.979 "trtype": "TCP" 00:21:56.980 } 00:21:56.980 ] 00:21:56.980 } 00:21:56.980 ] 00:21:56.980 }' 00:21:56.980 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:56.980 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:57.237 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:57.237 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:57.237 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 260211 00:22:05.355 Initializing NVMe Controllers 00:22:05.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:05.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:05.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:05.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:05.355 Initialization complete. Launching workers. 
00:22:05.355 ======================================================== 00:22:05.355 Latency(us) 00:22:05.355 Device Information : IOPS MiB/s Average min max 00:22:05.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10419.90 40.70 6144.50 2404.97 9722.01 00:22:05.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10056.00 39.28 6366.04 2650.16 10304.65 00:22:05.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10478.60 40.93 6109.78 2455.01 10337.32 00:22:05.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9997.80 39.05 6402.09 2307.24 10985.90 00:22:05.355 ======================================================== 00:22:05.355 Total : 40952.29 159.97 6252.90 2307.24 10985.90 00:22:05.355 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.355 rmmod nvme_tcp 00:22:05.355 rmmod nvme_fabrics 00:22:05.355 rmmod nvme_keyring 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:05.355 09:42:54 
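The `jq`/`wc -l` pipeline at `perf_adq.sh@85-87` above asserts that all four reactor poll groups are each driving an active I/O qpair while `spdk_nvme_perf` runs, i.e. that the load is spread across all cores. A stand-alone recreation of that check, with the `nvmf_get_stats` JSON inlined and abbreviated (normally it arrives over RPC; the field names match the log, the trimming is ours):

```shell
# Abbreviated copy of the nvmf_get_stats output captured in the log above.
nvmf_stats='{"tick_rate":2700000000,"poll_groups":[
 {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":1}]}'

# jq prints one line per poll group that currently owns an I/O qpair;
# wc -l turns those lines into a count, exactly as in perf_adq.sh@86.
count=$(echo "$nvmf_stats" \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
    | wc -l)

if [[ $count -ne 4 ]]; then
    echo "only $count of 4 poll groups are busy" >&2
    exit 1
fi
echo "$count"
```

With the stats shown in this run the count is 4, so the `[[ 4 -ne 4 ]]` guard falls through and the script proceeds to wait on the perf pid.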
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 260184 ']' 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 260184 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 260184 ']' 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 260184 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:05.355 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.356 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 260184 00:22:05.356 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:05.356 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:05.356 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 260184' 00:22:05.356 killing process with pid 260184 00:22:05.356 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 260184 00:22:05.356 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 260184 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:05.616 09:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.616 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.541 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.541 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:07.541 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:07.541 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:08.597 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:10.665 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.935 09:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.935 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:22:15.936 Found 0000:09:00.0 (0x8086 - 0x1592) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:22:15.936 
Found 0000:09:00.1 (0x8086 - 0x1592) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:15.936 Found net devices under 0000:09:00.0: cvl_0_0 00:22:15.936 09:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:15.936 Found net devices under 0000:09:00.1: cvl_0_1 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:22:15.936 00:22:15.936 --- 10.0.0.2 ping statistics --- 00:22:15.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.936 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:22:15.936 00:22:15.936 --- 10.0.0.1 ping statistics --- 00:22:15.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.936 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:15.936 net.core.busy_poll = 1 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:15.936 net.core.busy_read = 1 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:15.936 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=262921 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 
262921 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 262921 ']' 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.937 [2024-10-07 09:43:04.619217] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:22:15.937 [2024-10-07 09:43:04.619303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.937 [2024-10-07 09:43:04.680395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.937 [2024-10-07 09:43:04.786708] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.937 [2024-10-07 09:43:04.786767] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.937 [2024-10-07 09:43:04.786792] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.937 [2024-10-07 09:43:04.786803] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:15.937 [2024-10-07 09:43:04.786813] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.937 [2024-10-07 09:43:04.788265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.937 [2024-10-07 09:43:04.788329] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.937 [2024-10-07 09:43:04.788396] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.937 [2024-10-07 09:43:04.788399] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.937 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.196 [2024-10-07 09:43:05.028364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.196 09:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.196 Malloc1 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.196 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.197 [2024-10-07 09:43:05.080295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=262949 
00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:16.197 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:18.101 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:18.101 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.101 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.358 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.358 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:18.358 "tick_rate": 2700000000, 00:22:18.358 "poll_groups": [ 00:22:18.358 { 00:22:18.358 "name": "nvmf_tgt_poll_group_000", 00:22:18.358 "admin_qpairs": 1, 00:22:18.358 "io_qpairs": 2, 00:22:18.358 "current_admin_qpairs": 1, 00:22:18.358 "current_io_qpairs": 2, 00:22:18.358 "pending_bdev_io": 0, 00:22:18.358 "completed_nvme_io": 26059, 00:22:18.358 "transports": [ 00:22:18.358 { 00:22:18.358 "trtype": "TCP" 00:22:18.358 } 00:22:18.358 ] 00:22:18.358 }, 00:22:18.358 { 00:22:18.358 "name": "nvmf_tgt_poll_group_001", 00:22:18.358 "admin_qpairs": 0, 00:22:18.358 "io_qpairs": 2, 00:22:18.358 "current_admin_qpairs": 0, 00:22:18.358 "current_io_qpairs": 2, 00:22:18.358 "pending_bdev_io": 0, 00:22:18.358 "completed_nvme_io": 25555, 00:22:18.358 "transports": [ 00:22:18.358 { 00:22:18.358 "trtype": "TCP" 00:22:18.358 } 00:22:18.358 ] 00:22:18.359 }, 00:22:18.359 { 00:22:18.359 "name": "nvmf_tgt_poll_group_002", 00:22:18.359 "admin_qpairs": 0, 00:22:18.359 "io_qpairs": 0, 00:22:18.359 "current_admin_qpairs": 0, 
00:22:18.359 "current_io_qpairs": 0, 00:22:18.359 "pending_bdev_io": 0, 00:22:18.359 "completed_nvme_io": 0, 00:22:18.359 "transports": [ 00:22:18.359 { 00:22:18.359 "trtype": "TCP" 00:22:18.359 } 00:22:18.359 ] 00:22:18.359 }, 00:22:18.359 { 00:22:18.359 "name": "nvmf_tgt_poll_group_003", 00:22:18.359 "admin_qpairs": 0, 00:22:18.359 "io_qpairs": 0, 00:22:18.359 "current_admin_qpairs": 0, 00:22:18.359 "current_io_qpairs": 0, 00:22:18.359 "pending_bdev_io": 0, 00:22:18.359 "completed_nvme_io": 0, 00:22:18.359 "transports": [ 00:22:18.359 { 00:22:18.359 "trtype": "TCP" 00:22:18.359 } 00:22:18.359 ] 00:22:18.359 } 00:22:18.359 ] 00:22:18.359 }' 00:22:18.359 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:18.359 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:18.359 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:18.359 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:18.359 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 262949 00:22:26.474 Initializing NVMe Controllers 00:22:26.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:26.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:26.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:26.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:26.474 Initialization complete. Launching workers. 
00:22:26.474 ======================================================== 00:22:26.474 Latency(us) 00:22:26.474 Device Information : IOPS MiB/s Average min max 00:22:26.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6396.82 24.99 10006.93 1702.13 54085.16 00:22:26.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7537.11 29.44 8494.97 1327.53 54246.87 00:22:26.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5919.83 23.12 10812.14 1594.39 54474.33 00:22:26.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6884.12 26.89 9307.09 1988.68 54275.54 00:22:26.474 ======================================================== 00:22:26.474 Total : 26737.88 104.44 9578.81 1327.53 54474.33 00:22:26.474 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:26.474 rmmod nvme_tcp 00:22:26.474 rmmod nvme_fabrics 00:22:26.474 rmmod nvme_keyring 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:26.474 09:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 262921 ']' 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 262921 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 262921 ']' 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 262921 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 262921 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 262921' 00:22:26.474 killing process with pid 262921 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 262921 00:22:26.474 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 262921 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:26.733 09:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.733 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:30.022 00:22:30.022 real 0m46.678s 00:22:30.022 user 2m38.703s 00:22:30.022 sys 0m11.483s 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.022 ************************************ 00:22:30.022 END TEST nvmf_perf_adq 00:22:30.022 ************************************ 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:30.022 ************************************ 00:22:30.022 START TEST nvmf_shutdown 00:22:30.022 ************************************ 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:30.022 * Looking for test storage... 00:22:30.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:30.022 09:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:30.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.022 --rc genhtml_branch_coverage=1 00:22:30.022 --rc genhtml_function_coverage=1 00:22:30.022 --rc genhtml_legend=1 00:22:30.022 --rc geninfo_all_blocks=1 00:22:30.022 --rc geninfo_unexecuted_blocks=1 00:22:30.022 00:22:30.022 ' 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:30.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.022 --rc genhtml_branch_coverage=1 00:22:30.022 --rc genhtml_function_coverage=1 00:22:30.022 --rc genhtml_legend=1 00:22:30.022 --rc geninfo_all_blocks=1 00:22:30.022 --rc geninfo_unexecuted_blocks=1 00:22:30.022 00:22:30.022 ' 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:30.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.022 --rc genhtml_branch_coverage=1 00:22:30.022 --rc genhtml_function_coverage=1 00:22:30.022 --rc genhtml_legend=1 00:22:30.022 --rc geninfo_all_blocks=1 00:22:30.022 --rc geninfo_unexecuted_blocks=1 00:22:30.022 00:22:30.022 ' 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:30.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.022 --rc genhtml_branch_coverage=1 00:22:30.022 --rc genhtml_function_coverage=1 00:22:30.022 --rc genhtml_legend=1 00:22:30.022 --rc geninfo_all_blocks=1 00:22:30.022 --rc geninfo_unexecuted_blocks=1 00:22:30.022 00:22:30.022 ' 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.022 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:30.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:30.023 ************************************ 00:22:30.023 START TEST nvmf_shutdown_tc1 00:22:30.023 ************************************ 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.023 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.557 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.557 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.557 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.557 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.557 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.557 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.557 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.557 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:32.558 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.558 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:22:32.558 Found 0000:09:00.0 (0x8086 - 0x1592) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:32.558 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:22:32.558 Found 0000:09:00.1 (0x8086 - 0x1592) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:32.558 Found net devices under 0000:09:00.0: cvl_0_0 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:32.558 Found net devices under 0000:09:00.1: cvl_0_1 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.558 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.558 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:22:32.559 00:22:32.559 --- 10.0.0.2 ping statistics --- 00:22:32.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.559 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:22:32.559 00:22:32.559 --- 10.0.0.1 ping statistics --- 00:22:32.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.559 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=266098 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 266098 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 266098 ']' 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:32.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.559 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.559 [2024-10-07 09:43:21.265691] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:22:32.559 [2024-10-07 09:43:21.265780] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.559 [2024-10-07 09:43:21.329121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.559 [2024-10-07 09:43:21.438316] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.559 [2024-10-07 09:43:21.438392] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.559 [2024-10-07 09:43:21.438406] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.559 [2024-10-07 09:43:21.438416] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.559 [2024-10-07 09:43:21.438425] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.559 [2024-10-07 09:43:21.440302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.559 [2024-10-07 09:43:21.440367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.559 [2024-10-07 09:43:21.440432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:32.559 [2024-10-07 09:43:21.440435] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.818 [2024-10-07 09:43:21.583476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.818 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.818 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.818 Malloc1 00:22:32.818 [2024-10-07 09:43:21.657869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.818 Malloc2 00:22:32.818 Malloc3 00:22:32.818 Malloc4 00:22:33.077 Malloc5 00:22:33.077 Malloc6 00:22:33.077 Malloc7 00:22:33.077 Malloc8 00:22:33.077 Malloc9 
00:22:33.077 Malloc10 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=266270 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 266270 /var/tmp/bdevperf.sock 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 266270 ']' 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:33.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.337 { 00:22:33.337 "params": { 00:22:33.337 "name": "Nvme$subsystem", 00:22:33.337 "trtype": "$TEST_TRANSPORT", 00:22:33.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.337 "adrfam": "ipv4", 00:22:33.337 "trsvcid": "$NVMF_PORT", 00:22:33.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.337 "hdgst": ${hdgst:-false}, 00:22:33.337 "ddgst": ${ddgst:-false} 00:22:33.337 }, 00:22:33.337 "method": "bdev_nvme_attach_controller" 00:22:33.337 } 00:22:33.337 EOF 00:22:33.337 )") 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.337 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.337 { 00:22:33.337 "params": { 00:22:33.337 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": ${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.338 { 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": ${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.338 { 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": 
${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.338 { 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": ${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.338 { 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": ${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 
00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.338 { 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": ${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.338 { 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": ${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.338 { 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": ${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.338 { 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme$subsystem", 00:22:33.338 "trtype": "$TEST_TRANSPORT", 00:22:33.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "$NVMF_PORT", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.338 "hdgst": ${hdgst:-false}, 00:22:33.338 "ddgst": ${ddgst:-false} 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 } 00:22:33.338 EOF 00:22:33.338 )") 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # jq . 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:33.338 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme1", 00:22:33.338 "trtype": "tcp", 00:22:33.338 "traddr": "10.0.0.2", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "4420", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.338 "hdgst": false, 00:22:33.338 "ddgst": false 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 },{ 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme2", 00:22:33.338 "trtype": "tcp", 00:22:33.338 "traddr": "10.0.0.2", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "4420", 00:22:33.338 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.338 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:33.338 "hdgst": false, 00:22:33.338 "ddgst": false 00:22:33.338 }, 00:22:33.338 "method": "bdev_nvme_attach_controller" 00:22:33.338 },{ 00:22:33.338 "params": { 00:22:33.338 "name": "Nvme3", 00:22:33.338 "trtype": "tcp", 00:22:33.338 "traddr": "10.0.0.2", 00:22:33.338 "adrfam": "ipv4", 00:22:33.338 "trsvcid": "4420", 00:22:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:33.339 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:33.339 "hdgst": false, 00:22:33.339 "ddgst": false 00:22:33.339 }, 00:22:33.339 "method": "bdev_nvme_attach_controller" 00:22:33.339 },{ 00:22:33.339 "params": { 00:22:33.339 "name": "Nvme4", 00:22:33.339 "trtype": "tcp", 00:22:33.339 "traddr": "10.0.0.2", 00:22:33.339 "adrfam": "ipv4", 00:22:33.339 "trsvcid": "4420", 00:22:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:33.339 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:33.339 "hdgst": false, 00:22:33.339 "ddgst": false 00:22:33.339 }, 00:22:33.339 "method": "bdev_nvme_attach_controller" 00:22:33.339 },{ 
00:22:33.339 "params": { 00:22:33.339 "name": "Nvme5", 00:22:33.339 "trtype": "tcp", 00:22:33.339 "traddr": "10.0.0.2", 00:22:33.339 "adrfam": "ipv4", 00:22:33.339 "trsvcid": "4420", 00:22:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:33.339 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:33.339 "hdgst": false, 00:22:33.339 "ddgst": false 00:22:33.339 }, 00:22:33.339 "method": "bdev_nvme_attach_controller" 00:22:33.339 },{ 00:22:33.339 "params": { 00:22:33.339 "name": "Nvme6", 00:22:33.339 "trtype": "tcp", 00:22:33.339 "traddr": "10.0.0.2", 00:22:33.339 "adrfam": "ipv4", 00:22:33.339 "trsvcid": "4420", 00:22:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:33.339 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:33.339 "hdgst": false, 00:22:33.339 "ddgst": false 00:22:33.339 }, 00:22:33.339 "method": "bdev_nvme_attach_controller" 00:22:33.339 },{ 00:22:33.339 "params": { 00:22:33.339 "name": "Nvme7", 00:22:33.339 "trtype": "tcp", 00:22:33.339 "traddr": "10.0.0.2", 00:22:33.339 "adrfam": "ipv4", 00:22:33.339 "trsvcid": "4420", 00:22:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:33.339 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:33.339 "hdgst": false, 00:22:33.339 "ddgst": false 00:22:33.339 }, 00:22:33.339 "method": "bdev_nvme_attach_controller" 00:22:33.339 },{ 00:22:33.339 "params": { 00:22:33.339 "name": "Nvme8", 00:22:33.339 "trtype": "tcp", 00:22:33.339 "traddr": "10.0.0.2", 00:22:33.339 "adrfam": "ipv4", 00:22:33.339 "trsvcid": "4420", 00:22:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:33.339 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:33.339 "hdgst": false, 00:22:33.339 "ddgst": false 00:22:33.339 }, 00:22:33.339 "method": "bdev_nvme_attach_controller" 00:22:33.339 },{ 00:22:33.339 "params": { 00:22:33.339 "name": "Nvme9", 00:22:33.339 "trtype": "tcp", 00:22:33.339 "traddr": "10.0.0.2", 00:22:33.339 "adrfam": "ipv4", 00:22:33.339 "trsvcid": "4420", 00:22:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:33.339 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:33.339 "hdgst": false, 00:22:33.339 "ddgst": false 00:22:33.339 }, 00:22:33.339 "method": "bdev_nvme_attach_controller" 00:22:33.339 },{ 00:22:33.339 "params": { 00:22:33.339 "name": "Nvme10", 00:22:33.339 "trtype": "tcp", 00:22:33.339 "traddr": "10.0.0.2", 00:22:33.339 "adrfam": "ipv4", 00:22:33.339 "trsvcid": "4420", 00:22:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:33.339 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:33.339 "hdgst": false, 00:22:33.339 "ddgst": false 00:22:33.339 }, 00:22:33.339 "method": "bdev_nvme_attach_controller" 00:22:33.339 }' 00:22:33.339 [2024-10-07 09:43:22.158547] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:22:33.339 [2024-10-07 09:43:22.158614] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:33.339 [2024-10-07 09:43:22.218752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.339 [2024-10-07 09:43:22.329308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 266270 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:35.244 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:36.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 266270 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 266098 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.622 { 00:22:36.622 "params": { 00:22:36.622 "name": "Nvme$subsystem", 00:22:36.622 "trtype": "$TEST_TRANSPORT", 00:22:36.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.622 "adrfam": "ipv4", 00:22:36.622 "trsvcid": "$NVMF_PORT", 00:22:36.622 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.622 "hdgst": ${hdgst:-false}, 00:22:36.622 "ddgst": ${ddgst:-false} 00:22:36.622 }, 00:22:36.622 "method": "bdev_nvme_attach_controller" 00:22:36.622 } 00:22:36.622 EOF 00:22:36.622 )") 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.622 { 00:22:36.622 "params": { 00:22:36.622 "name": "Nvme$subsystem", 00:22:36.622 "trtype": "$TEST_TRANSPORT", 00:22:36.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.622 "adrfam": "ipv4", 00:22:36.622 "trsvcid": "$NVMF_PORT", 00:22:36.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.622 "hdgst": ${hdgst:-false}, 00:22:36.622 "ddgst": ${ddgst:-false} 00:22:36.622 }, 00:22:36.622 "method": "bdev_nvme_attach_controller" 00:22:36.622 } 00:22:36.622 EOF 00:22:36.622 )") 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.622 { 00:22:36.622 "params": { 00:22:36.622 "name": "Nvme$subsystem", 00:22:36.622 "trtype": "$TEST_TRANSPORT", 00:22:36.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.622 "adrfam": "ipv4", 00:22:36.622 "trsvcid": "$NVMF_PORT", 00:22:36.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.622 "hdgst": 
${hdgst:-false}, 00:22:36.622 "ddgst": ${ddgst:-false} 00:22:36.622 }, 00:22:36.622 "method": "bdev_nvme_attach_controller" 00:22:36.622 } 00:22:36.622 EOF 00:22:36.622 )") 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.622 { 00:22:36.622 "params": { 00:22:36.622 "name": "Nvme$subsystem", 00:22:36.622 "trtype": "$TEST_TRANSPORT", 00:22:36.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.622 "adrfam": "ipv4", 00:22:36.622 "trsvcid": "$NVMF_PORT", 00:22:36.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.622 "hdgst": ${hdgst:-false}, 00:22:36.622 "ddgst": ${ddgst:-false} 00:22:36.622 }, 00:22:36.622 "method": "bdev_nvme_attach_controller" 00:22:36.622 } 00:22:36.622 EOF 00:22:36.622 )") 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.622 { 00:22:36.622 "params": { 00:22:36.622 "name": "Nvme$subsystem", 00:22:36.622 "trtype": "$TEST_TRANSPORT", 00:22:36.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.622 "adrfam": "ipv4", 00:22:36.622 "trsvcid": "$NVMF_PORT", 00:22:36.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.622 "hdgst": ${hdgst:-false}, 00:22:36.622 "ddgst": ${ddgst:-false} 00:22:36.622 }, 00:22:36.622 "method": "bdev_nvme_attach_controller" 
00:22:36.622 } 00:22:36.622 EOF 00:22:36.622 )") 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.622 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.622 { 00:22:36.622 "params": { 00:22:36.622 "name": "Nvme$subsystem", 00:22:36.622 "trtype": "$TEST_TRANSPORT", 00:22:36.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.622 "adrfam": "ipv4", 00:22:36.622 "trsvcid": "$NVMF_PORT", 00:22:36.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.622 "hdgst": ${hdgst:-false}, 00:22:36.622 "ddgst": ${ddgst:-false} 00:22:36.622 }, 00:22:36.622 "method": "bdev_nvme_attach_controller" 00:22:36.622 } 00:22:36.623 EOF 00:22:36.623 )") 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.623 { 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme$subsystem", 00:22:36.623 "trtype": "$TEST_TRANSPORT", 00:22:36.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "$NVMF_PORT", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.623 "hdgst": ${hdgst:-false}, 00:22:36.623 "ddgst": ${ddgst:-false} 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 } 00:22:36.623 EOF 00:22:36.623 )") 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.623 { 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme$subsystem", 00:22:36.623 "trtype": "$TEST_TRANSPORT", 00:22:36.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "$NVMF_PORT", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.623 "hdgst": ${hdgst:-false}, 00:22:36.623 "ddgst": ${ddgst:-false} 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 } 00:22:36.623 EOF 00:22:36.623 )") 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.623 { 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme$subsystem", 00:22:36.623 "trtype": "$TEST_TRANSPORT", 00:22:36.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "$NVMF_PORT", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.623 "hdgst": ${hdgst:-false}, 00:22:36.623 "ddgst": ${ddgst:-false} 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 } 00:22:36.623 EOF 00:22:36.623 )") 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.623 { 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme$subsystem", 00:22:36.623 "trtype": "$TEST_TRANSPORT", 00:22:36.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "$NVMF_PORT", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.623 "hdgst": ${hdgst:-false}, 00:22:36.623 "ddgst": ${ddgst:-false} 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 } 00:22:36.623 EOF 00:22:36.623 )") 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:36.623 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme1", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme2", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 
00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme3", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme4", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme5", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme6", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme7", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme8", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme9", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 },{ 00:22:36.623 "params": { 00:22:36.623 "name": "Nvme10", 00:22:36.623 "trtype": "tcp", 00:22:36.623 "traddr": "10.0.0.2", 00:22:36.623 "adrfam": "ipv4", 00:22:36.623 "trsvcid": "4420", 00:22:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:36.623 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:36.623 "hdgst": false, 00:22:36.623 "ddgst": false 00:22:36.623 }, 00:22:36.623 "method": "bdev_nvme_attach_controller" 00:22:36.623 }' 00:22:36.623 [2024-10-07 09:43:25.270664] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:22:36.623 [2024-10-07 09:43:25.270757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266673 ] 00:22:36.623 [2024-10-07 09:43:25.331245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.623 [2024-10-07 09:43:25.441866] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.998 Running I/O for 1 seconds... 00:22:39.196 1741.00 IOPS, 108.81 MiB/s 00:22:39.196 Latency(us) 00:22:39.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.196 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme1n1 : 1.02 187.42 11.71 0.00 0.00 337895.98 21262.79 274959.93 00:22:39.196 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme2n1 : 1.13 227.13 14.20 0.00 0.00 274444.89 20874.43 262532.36 00:22:39.196 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme3n1 : 1.12 232.98 14.56 0.00 0.00 261328.07 8786.68 239230.67 00:22:39.196 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme4n1 : 1.12 232.77 14.55 0.00 0.00 257212.93 7573.05 256318.58 00:22:39.196 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme5n1 : 1.14 224.60 14.04 0.00 0.00 263872.09 22039.51 259425.47 00:22:39.196 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme6n1 
: 1.17 278.59 17.41 0.00 0.00 208459.15 17767.54 256318.58 00:22:39.196 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme7n1 : 1.13 226.03 14.13 0.00 0.00 252979.58 36311.80 239230.67 00:22:39.196 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme8n1 : 1.14 225.53 14.10 0.00 0.00 249053.30 17476.27 257872.02 00:22:39.196 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme9n1 : 1.17 218.00 13.62 0.00 0.00 254219.95 21554.06 292047.83 00:22:39.196 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.196 Verification LBA range: start 0x0 length 0x400 00:22:39.196 Nvme10n1 : 1.18 270.27 16.89 0.00 0.00 201635.73 4830.25 262532.36 00:22:39.196 =================================================================================================================== 00:22:39.196 Total : 2323.29 145.21 0.00 0.00 251565.88 4830.25 292047.83 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:39.458 09:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.458 rmmod nvme_tcp 00:22:39.458 rmmod nvme_fabrics 00:22:39.458 rmmod nvme_keyring 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 266098 ']' 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 266098 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 266098 ']' 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 266098 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.458 09:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 266098 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:39.458 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:39.459 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 266098' 00:22:39.459 killing process with pid 266098 00:22:39.459 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 266098 00:22:39.459 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 266098 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.028 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.961 00:22:41.961 real 0m11.922s 00:22:41.961 user 0m34.370s 00:22:41.961 sys 0m3.245s 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.961 ************************************ 00:22:41.961 END TEST nvmf_shutdown_tc1 00:22:41.961 ************************************ 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:41.961 ************************************ 00:22:41.961 START TEST nvmf_shutdown_tc2 00:22:41.961 ************************************ 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 
00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 
00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- 
# x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.961 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.221 09:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:22:42.221 Found 0000:09:00.0 (0x8086 - 0x1592) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:22:42.221 Found 0000:09:00.1 (0x8086 - 0x1592) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:42.221 09:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:42.221 Found net devices under 0000:09:00.0: cvl_0_0 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.221 09:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:42.221 Found net devices under 0000:09:00.1: cvl_0_1 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.221 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.222 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:22:42.222 00:22:42.222 --- 10.0.0.2 ping statistics --- 00:22:42.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.222 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:22:42.222 00:22:42.222 --- 10.0.0.1 ping statistics --- 00:22:42.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.222 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.222 
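The nvmftestinit steps traced above (create the cvl_0_0_ns_spdk namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, verify with ping) can be summarized as a dry-run sketch. Interface and namespace names and addresses are copied from this log; plan() only prints the commands, since the real sequence needs root and the physical cvl_0_* NICs:

```shell
#!/bin/sh
# Dry-run preview of the namespace wiring performed by nvmftestinit above.
# The target interface is isolated in its own netns so target (10.0.0.2)
# and initiator (10.0.0.1) can talk over real hardware on one host.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk
plan() {
    echo "ip netns add $NS"
    echo "ip link set $TGT_IF netns $NS"
    echo "ip addr add 10.0.0.1/24 dev $INI_IF"
    echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
    echo "ip link set $INI_IF up"
    echo "ip netns exec $NS ip link set $TGT_IF up"
    echo "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
    echo "ping -c 1 10.0.0.2"
}
plan
```

The ping statistics in the log (0% packet loss in both directions) are the success check for exactly this wiring before nvmf_tgt is started inside the namespace.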
09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=267414 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 267414 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 267414 ']' 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.222 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.222 [2024-10-07 09:43:31.168244] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:22:42.222 [2024-10-07 09:43:31.168339] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.481 [2024-10-07 09:43:31.231312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.481 [2024-10-07 09:43:31.331755] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.481 [2024-10-07 09:43:31.331817] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.481 [2024-10-07 09:43:31.331841] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.481 [2024-10-07 09:43:31.331852] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.481 [2024-10-07 09:43:31.331862] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:42.481 [2024-10-07 09:43:31.333371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.481 [2024-10-07 09:43:31.333433] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.481 [2024-10-07 09:43:31.333544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.481 [2024-10-07 09:43:31.333552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.481 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.481 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:42.481 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:42.481 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.481 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.741 [2024-10-07 09:43:31.485153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.741 09:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.741 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.741 Malloc1 00:22:42.741 [2024-10-07 09:43:31.567990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.741 Malloc2 00:22:42.741 Malloc3 00:22:42.741 Malloc4 00:22:42.741 Malloc5 00:22:43.000 Malloc6 00:22:43.000 Malloc7 00:22:43.000 Malloc8 00:22:43.000 Malloc9 
00:22:43.000 Malloc10 00:22:43.258 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.258 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:43.258 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.258 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=267588 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 267588 /var/tmp/bdevperf.sock 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 267588 ']' 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:43.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.258 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.258 { 00:22:43.258 "params": { 00:22:43.258 "name": "Nvme$subsystem", 00:22:43.258 "trtype": "$TEST_TRANSPORT", 00:22:43.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.258 "adrfam": "ipv4", 00:22:43.258 "trsvcid": "$NVMF_PORT", 00:22:43.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.258 "hdgst": ${hdgst:-false}, 00:22:43.258 "ddgst": ${ddgst:-false} 00:22:43.258 }, 00:22:43.258 "method": "bdev_nvme_attach_controller" 00:22:43.258 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": ${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": ${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": 
${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": ${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": ${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 
00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": ${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": ${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": ${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.259 { 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme$subsystem", 00:22:43.259 "trtype": "$TEST_TRANSPORT", 00:22:43.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "$NVMF_PORT", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.259 "hdgst": ${hdgst:-false}, 00:22:43.259 "ddgst": ${ddgst:-false} 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 } 00:22:43.259 EOF 00:22:43.259 )") 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # jq . 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:43.259 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme1", 00:22:43.259 "trtype": "tcp", 00:22:43.259 "traddr": "10.0.0.2", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "4420", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.259 "hdgst": false, 00:22:43.259 "ddgst": false 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 },{ 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme2", 00:22:43.259 "trtype": "tcp", 00:22:43.259 "traddr": "10.0.0.2", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "4420", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.259 "hdgst": false, 00:22:43.259 "ddgst": false 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 },{ 00:22:43.259 "params": { 00:22:43.259 "name": "Nvme3", 00:22:43.259 "trtype": "tcp", 00:22:43.259 "traddr": "10.0.0.2", 00:22:43.259 "adrfam": "ipv4", 00:22:43.259 "trsvcid": "4420", 00:22:43.259 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.259 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.259 "hdgst": false, 00:22:43.259 "ddgst": false 00:22:43.259 }, 00:22:43.259 "method": "bdev_nvme_attach_controller" 00:22:43.259 },{ 00:22:43.260 "params": { 00:22:43.260 "name": "Nvme4", 00:22:43.260 "trtype": "tcp", 00:22:43.260 "traddr": "10.0.0.2", 00:22:43.260 "adrfam": "ipv4", 00:22:43.260 "trsvcid": "4420", 00:22:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.260 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.260 "hdgst": false, 00:22:43.260 "ddgst": false 00:22:43.260 }, 00:22:43.260 "method": "bdev_nvme_attach_controller" 00:22:43.260 },{ 
00:22:43.260 "params": { 00:22:43.260 "name": "Nvme5", 00:22:43.260 "trtype": "tcp", 00:22:43.260 "traddr": "10.0.0.2", 00:22:43.260 "adrfam": "ipv4", 00:22:43.260 "trsvcid": "4420", 00:22:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.260 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.260 "hdgst": false, 00:22:43.260 "ddgst": false 00:22:43.260 }, 00:22:43.260 "method": "bdev_nvme_attach_controller" 00:22:43.260 },{ 00:22:43.260 "params": { 00:22:43.260 "name": "Nvme6", 00:22:43.260 "trtype": "tcp", 00:22:43.260 "traddr": "10.0.0.2", 00:22:43.260 "adrfam": "ipv4", 00:22:43.260 "trsvcid": "4420", 00:22:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.260 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.260 "hdgst": false, 00:22:43.260 "ddgst": false 00:22:43.260 }, 00:22:43.260 "method": "bdev_nvme_attach_controller" 00:22:43.260 },{ 00:22:43.260 "params": { 00:22:43.260 "name": "Nvme7", 00:22:43.260 "trtype": "tcp", 00:22:43.260 "traddr": "10.0.0.2", 00:22:43.260 "adrfam": "ipv4", 00:22:43.260 "trsvcid": "4420", 00:22:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.260 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.260 "hdgst": false, 00:22:43.260 "ddgst": false 00:22:43.260 }, 00:22:43.260 "method": "bdev_nvme_attach_controller" 00:22:43.260 },{ 00:22:43.260 "params": { 00:22:43.260 "name": "Nvme8", 00:22:43.260 "trtype": "tcp", 00:22:43.260 "traddr": "10.0.0.2", 00:22:43.260 "adrfam": "ipv4", 00:22:43.260 "trsvcid": "4420", 00:22:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.260 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:43.260 "hdgst": false, 00:22:43.260 "ddgst": false 00:22:43.260 }, 00:22:43.260 "method": "bdev_nvme_attach_controller" 00:22:43.260 },{ 00:22:43.260 "params": { 00:22:43.260 "name": "Nvme9", 00:22:43.260 "trtype": "tcp", 00:22:43.260 "traddr": "10.0.0.2", 00:22:43.260 "adrfam": "ipv4", 00:22:43.260 "trsvcid": "4420", 00:22:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:43.260 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:43.260 "hdgst": false, 00:22:43.260 "ddgst": false 00:22:43.260 }, 00:22:43.260 "method": "bdev_nvme_attach_controller" 00:22:43.260 },{ 00:22:43.260 "params": { 00:22:43.260 "name": "Nvme10", 00:22:43.260 "trtype": "tcp", 00:22:43.260 "traddr": "10.0.0.2", 00:22:43.260 "adrfam": "ipv4", 00:22:43.260 "trsvcid": "4420", 00:22:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:43.260 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:43.260 "hdgst": false, 00:22:43.260 "ddgst": false 00:22:43.260 }, 00:22:43.260 "method": "bdev_nvme_attach_controller" 00:22:43.260 }' 00:22:43.260 [2024-10-07 09:43:32.066758] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:22:43.260 [2024-10-07 09:43:32.066835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid267588 ] 00:22:43.260 [2024-10-07 09:43:32.127192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.260 [2024-10-07 09:43:32.237697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.168 Running I/O for 10 seconds... 
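The `nvmf/common.sh` trace above repeats one shell pattern per subsystem: append a heredoc-generated JSON fragment to a `config` array, then join the fragments with `IFS=','` and validate the result with `jq`. A minimal standalone sketch of that pattern (field names trimmed down for illustration; only `name` and `subnqn` are kept):

```shell
#!/usr/bin/env bash
# Build one JSON fragment per subsystem from a heredoc, exactly as the
# trace does with config+=("$(cat <<-EOF ... EOF)").
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Joining with IFS=',' turns "${config[*]}" into comma-separated objects;
# wrapping them in [] yields a JSON array that jq can validate and query.
IFS=','
printf '[%s]\n' "${config[*]}" | jq -r '.[].params.name'
```

Because `EOF` is unquoted, `$subsystem` expands inside the heredoc, which is how the trace's `Nvme$subsystem` templates become the concrete `Nvme1` … `Nvme10` entries printed later in the log.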
00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:45.168 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:45.427 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:45.686 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:45.686 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.686 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.686 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.686 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.686 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 267588 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 267588 ']' 
00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 267588 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 267588 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 267588' 00:22:45.944 killing process with pid 267588 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 267588 00:22:45.944 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 267588 00:22:45.944 1886.00 IOPS, 117.88 MiB/s Received shutdown signal, test time was about 1.031002 seconds 00:22:45.944 00:22:45.944 Latency(us) 00:22:45.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.944 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.944 Verification LBA range: start 0x0 length 0x400 00:22:45.944 Nvme1n1 : 0.99 194.07 12.13 0.00 0.00 325387.25 21651.15 268746.15 00:22:45.944 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.944 Verification LBA range: start 0x0 length 0x400 00:22:45.944 Nvme2n1 : 0.97 202.81 12.68 0.00 0.00 301499.48 2888.44 240784.12 00:22:45.944 Job: 
Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.944 Verification LBA range: start 0x0 length 0x400 00:22:45.944 Nvme3n1 : 0.97 197.11 12.32 0.00 0.00 304631.66 40389.59 260978.92 00:22:45.944 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.944 Verification LBA range: start 0x0 length 0x400 00:22:45.944 Nvme4n1 : 1.03 248.51 15.53 0.00 0.00 236551.21 19515.16 273406.48 00:22:45.944 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.944 Verification LBA range: start 0x0 length 0x400 00:22:45.944 Nvme5n1 : 0.97 203.08 12.69 0.00 0.00 277903.50 2936.98 222142.77 00:22:45.944 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.944 Verification LBA range: start 0x0 length 0x400 00:22:45.944 Nvme6n1 : 1.02 250.41 15.65 0.00 0.00 222397.82 20971.52 233016.89 00:22:45.945 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.945 Verification LBA range: start 0x0 length 0x400 00:22:45.945 Nvme7n1 : 1.03 254.38 15.90 0.00 0.00 213777.88 3883.61 240784.12 00:22:45.945 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.945 Verification LBA range: start 0x0 length 0x400 00:22:45.945 Nvme8n1 : 0.98 196.39 12.27 0.00 0.00 267038.21 18641.35 267192.70 00:22:45.945 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.945 Verification LBA range: start 0x0 length 0x400 00:22:45.945 Nvme9n1 : 1.02 188.57 11.79 0.00 0.00 272675.78 20388.98 290494.39 00:22:45.945 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.945 Verification LBA range: start 0x0 length 0x400 00:22:45.945 Nvme10n1 : 1.00 196.45 12.28 0.00 0.00 252786.73 2754.94 267192.70 00:22:45.945 =================================================================================================================== 00:22:45.945 Total : 2131.77 133.24 0.00 0.00 263521.48 2754.94 290494.39 
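The `waitforio` trace interleaved above (`shutdown.sh@58`–`@70`) is a bounded polling loop: read `num_read_ops` from `bdev_get_iostat`, succeed once it reaches 100, and give up after 10 attempts with a 0.25 s sleep between them. A self-contained sketch of that loop, with `get_read_ops` as a stand-in for the real `rpc_cmd ... | jq` pipeline and a fake counter that grows with wall time:

```shell
#!/usr/bin/env bash
# Stand-in for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#               | jq -r '.bdevs[0].num_read_ops'
# Here the counter is simulated from elapsed seconds so the sketch runs alone.
get_read_ops() {
  echo $(( SECONDS * 100 ))
}

# Mirrors the loop structure in the trace: ret=1, 10 retries, sleep 0.25.
waitforio() {
  local ret=1 i
  for (( i = 10; i != 0; i-- )); do
    if [ "$(get_read_ops)" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "I/O observed"
```

In the log the same loop runs three times, reading 3, then 67, then 131 ops before `131 -ge 100` finally takes the `break` path and `return 0` lets the test proceed to `killprocess`.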
00:22:46.203 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 267414 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.579 rmmod nvme_tcp 00:22:47.579 rmmod nvme_fabrics 00:22:47.579 rmmod nvme_keyring 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 267414 ']' 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 267414 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 267414 ']' 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 267414 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 267414 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 267414' 00:22:47.579 killing process with pid 267414 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 267414 00:22:47.579 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 267414 00:22:47.838 09:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.838 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.375 00:22:50.375 real 0m7.916s 00:22:50.375 user 0m24.308s 00:22:50.375 sys 0m1.510s 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.375 09:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.375 ************************************ 00:22:50.375 END TEST nvmf_shutdown_tc2 00:22:50.375 ************************************ 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.375 ************************************ 00:22:50.375 START TEST nvmf_shutdown_tc3 00:22:50.375 ************************************ 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.375 09:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:22:50.375 Found 0000:09:00.0 (0x8086 - 0x1592) 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.375 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.375 09:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:22:50.376 Found 0000:09:00.1 (0x8086 - 0x1592) 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:50.376 Found net devices under 0000:09:00.0: cvl_0_0 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:50.376 Found net devices under 0000:09:00.1: cvl_0_1 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.376 09:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.376 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:22:50.376 00:22:50.376 --- 10.0.0.2 ping statistics --- 00:22:50.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.376 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:22:50.376 00:22:50.376 --- 10.0.0.1 ping statistics --- 00:22:50.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.376 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=268469 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 268469 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 268469 ']' 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.376 09:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.376 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.376 [2024-10-07 09:43:39.139202] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:22:50.376 [2024-10-07 09:43:39.139268] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.376 [2024-10-07 09:43:39.197865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.376 [2024-10-07 09:43:39.301216] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.376 [2024-10-07 09:43:39.301281] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.376 [2024-10-07 09:43:39.301309] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.377 [2024-10-07 09:43:39.301321] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.377 [2024-10-07 09:43:39.301330] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:50.377 [2024-10-07 09:43:39.302824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.377 [2024-10-07 09:43:39.302886] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.377 [2024-10-07 09:43:39.302952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.377 [2024-10-07 09:43:39.302955] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.636 [2024-10-07 09:43:39.455294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.636 09:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.636 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.636 Malloc1 00:22:50.636 [2024-10-07 09:43:39.539033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.636 Malloc2 00:22:50.636 Malloc3 00:22:50.895 Malloc4 00:22:50.895 Malloc5 00:22:50.895 Malloc6 00:22:50.895 Malloc7 00:22:50.895 Malloc8 00:22:51.156 Malloc9 
00:22:51.156 Malloc10 00:22:51.156 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.156 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:51.156 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.156 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=268640 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 268640 /var/tmp/bdevperf.sock 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 268640 ']' 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:51.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.156 { 00:22:51.156 "params": { 00:22:51.156 "name": "Nvme$subsystem", 00:22:51.156 "trtype": "$TEST_TRANSPORT", 00:22:51.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.156 "adrfam": "ipv4", 00:22:51.156 "trsvcid": "$NVMF_PORT", 00:22:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.156 "hdgst": ${hdgst:-false}, 00:22:51.156 "ddgst": ${ddgst:-false} 00:22:51.156 }, 00:22:51.156 "method": "bdev_nvme_attach_controller" 00:22:51.156 } 00:22:51.156 EOF 00:22:51.156 )") 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.156 { 00:22:51.156 "params": { 00:22:51.156 "name": "Nvme$subsystem", 00:22:51.156 "trtype": "$TEST_TRANSPORT", 00:22:51.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.156 "adrfam": "ipv4", 00:22:51.156 "trsvcid": "$NVMF_PORT", 00:22:51.156 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.156 "hdgst": ${hdgst:-false}, 00:22:51.156 "ddgst": ${ddgst:-false} 00:22:51.156 }, 00:22:51.156 "method": "bdev_nvme_attach_controller" 00:22:51.156 } 00:22:51.156 EOF 00:22:51.156 )") 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.156 { 00:22:51.156 "params": { 00:22:51.156 "name": "Nvme$subsystem", 00:22:51.156 "trtype": "$TEST_TRANSPORT", 00:22:51.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.156 "adrfam": "ipv4", 00:22:51.156 "trsvcid": "$NVMF_PORT", 00:22:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.156 "hdgst": ${hdgst:-false}, 00:22:51.156 "ddgst": ${ddgst:-false} 00:22:51.156 }, 00:22:51.156 "method": "bdev_nvme_attach_controller" 00:22:51.156 } 00:22:51.156 EOF 00:22:51.156 )") 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.156 { 00:22:51.156 "params": { 00:22:51.156 "name": "Nvme$subsystem", 00:22:51.156 "trtype": "$TEST_TRANSPORT", 00:22:51.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.156 "adrfam": "ipv4", 00:22:51.156 "trsvcid": "$NVMF_PORT", 00:22:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.156 "hdgst": 
${hdgst:-false}, 00:22:51.156 "ddgst": ${ddgst:-false} 00:22:51.156 }, 00:22:51.156 "method": "bdev_nvme_attach_controller" 00:22:51.156 } 00:22:51.156 EOF 00:22:51.156 )") 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.156 { 00:22:51.156 "params": { 00:22:51.156 "name": "Nvme$subsystem", 00:22:51.156 "trtype": "$TEST_TRANSPORT", 00:22:51.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.156 "adrfam": "ipv4", 00:22:51.156 "trsvcid": "$NVMF_PORT", 00:22:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.156 "hdgst": ${hdgst:-false}, 00:22:51.156 "ddgst": ${ddgst:-false} 00:22:51.156 }, 00:22:51.156 "method": "bdev_nvme_attach_controller" 00:22:51.156 } 00:22:51.156 EOF 00:22:51.156 )") 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.156 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.156 { 00:22:51.156 "params": { 00:22:51.156 "name": "Nvme$subsystem", 00:22:51.156 "trtype": "$TEST_TRANSPORT", 00:22:51.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.156 "adrfam": "ipv4", 00:22:51.156 "trsvcid": "$NVMF_PORT", 00:22:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.156 "hdgst": ${hdgst:-false}, 00:22:51.156 "ddgst": ${ddgst:-false} 00:22:51.156 }, 00:22:51.156 "method": "bdev_nvme_attach_controller" 
00:22:51.156 } 00:22:51.156 EOF 00:22:51.156 )") 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.157 { 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme$subsystem", 00:22:51.157 "trtype": "$TEST_TRANSPORT", 00:22:51.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "$NVMF_PORT", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.157 "hdgst": ${hdgst:-false}, 00:22:51.157 "ddgst": ${ddgst:-false} 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 } 00:22:51.157 EOF 00:22:51.157 )") 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.157 { 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme$subsystem", 00:22:51.157 "trtype": "$TEST_TRANSPORT", 00:22:51.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "$NVMF_PORT", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.157 "hdgst": ${hdgst:-false}, 00:22:51.157 "ddgst": ${ddgst:-false} 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 } 00:22:51.157 EOF 00:22:51.157 )") 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@580 -- # cat 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.157 { 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme$subsystem", 00:22:51.157 "trtype": "$TEST_TRANSPORT", 00:22:51.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "$NVMF_PORT", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.157 "hdgst": ${hdgst:-false}, 00:22:51.157 "ddgst": ${ddgst:-false} 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 } 00:22:51.157 EOF 00:22:51.157 )") 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.157 { 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme$subsystem", 00:22:51.157 "trtype": "$TEST_TRANSPORT", 00:22:51.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "$NVMF_PORT", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.157 "hdgst": ${hdgst:-false}, 00:22:51.157 "ddgst": ${ddgst:-false} 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 } 00:22:51.157 EOF 00:22:51.157 )") 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # jq . 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:51.157 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme1", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme2", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme3", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme4", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 
00:22:51.157 "params": { 00:22:51.157 "name": "Nvme5", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme6", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme7", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme8", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme9", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.157 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 },{ 00:22:51.157 "params": { 00:22:51.157 "name": "Nvme10", 00:22:51.157 "trtype": "tcp", 00:22:51.157 "traddr": "10.0.0.2", 00:22:51.157 "adrfam": "ipv4", 00:22:51.157 "trsvcid": "4420", 00:22:51.157 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.157 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.157 "hdgst": false, 00:22:51.157 "ddgst": false 00:22:51.157 }, 00:22:51.157 "method": "bdev_nvme_attach_controller" 00:22:51.157 }' 00:22:51.157 [2024-10-07 09:43:40.065351] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:22:51.157 [2024-10-07 09:43:40.065426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid268640 ] 00:22:51.157 [2024-10-07 09:43:40.125293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.415 [2024-10-07 09:43:40.236284] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.318 Running I/O for 10 seconds... 
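The shell trace above shows how the bdevperf configuration is assembled: one JSON fragment per subsystem is appended to a bash array via a here-doc inside a `for` loop, the fragments are joined with `IFS=,`, and the result is validated with `jq`. A minimal standalone sketch of that pattern (values are illustrative stand-ins, not the exact `nvmf/common.sh` code):

```shell
#!/usr/bin/env bash
# Build one JSON fragment per subsystem in a bash array; the unquoted
# EOF delimiter lets $subsystem and ${hdgst:-false} expand inside the
# here-doc, mirroring the trace above.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# "${config[*]}" joins the array elements with the first character of
# IFS, i.e. a comma, turning the fragments into one JSON array body.
joined="$(IFS=,; printf '%s' "${config[*]}")"
printf '[%s]' "$joined" | jq -r '.[].params.name'
```

Joining with `IFS=,` and then piping through `jq .` is what produces the single pretty-printed config document seen in the `printf '%s\n'` output above.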
00:22:53.318 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.318 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:53.318 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.319 09:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:53.319 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:53.577 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:53.879 09:43:42 
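The `read_io_count` progression above (3, then 67, then 131 before the `-ge 100` test passes) is `waitforio` from `target/shutdown.sh` doing a bounded retry: query `bdev_get_iostat` over the RPC socket, extract `num_read_ops` with `jq`, and sleep-and-retry until the count crosses the threshold or ten attempts are spent. A simplified standalone sketch of that loop shape, where the hypothetical `fake_iostat` stands in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1`:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio retry loop: poll a counter until it reaches
# a threshold or the attempt budget (i) runs out.
reads=0
fake_iostat() {
  # Stand-in for the RPC call; each poll "observes" 60 more reads.
  reads=$((reads + 60))
  printf '{"bdevs":[{"name":"Nvme1n1","num_read_ops":%d}]}' "$reads"
}

waitforio() {
  local ret=1 i count
  for ((i = 10; i != 0; i--)); do
    count=$(fake_iostat | jq -r '.bdevs[0].num_read_ops')
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "I/O threshold reached after $reads read ops"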
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 268469 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 268469 ']' 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 268469 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 268469 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 268469' killing process with pid 268469 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 268469 00:22:53.879 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 268469 00:22:53.879 [2024-10-07 09:43:42.753199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146620 is same with the state(6) to be set 00:22:53.880 [2024-10-07 09:43:42.755435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149080 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.757525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146af0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759300]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759440] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.881 [2024-10-07 09:43:42.759587] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759746] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.759862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2146fc0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.760959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761018] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761159] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761308] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761446] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761589] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.882 [2024-10-07 09:43:42.761738] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21474b0 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763399] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763543] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763693] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763836] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d00 is same with the state(6) to be set 00:22:53.883 [2024-10-07 09:43:42.763848] [last message repeated for tqpair=0x2147d00 through 09:43:42.764020; identical repeats elided]
00:22:53.883 [2024-10-07 09:43:42.765943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21486c0 is same with the state(6) to be set [last message repeated through 09:43:42.766698; identical repeats elided]
00:22:53.884 [2024-10-07 09:43:42.767416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148b90 is same with the state(6) to be set [last message repeated through 09:43:42.768193; identical repeats elided]
00:22:53.885 [2024-10-07 09:43:42.771456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.885 [2024-10-07 09:43:42.771503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.885 [2024-10-07 09:43:42.771535]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.885 [2024-10-07 09:43:42.771552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [analogous command/completion pairs elided: WRITE cid:42-63 (lba:29952-32640, step 128) and READ cid:0-21 (lba:24576-27264, step 128), each completed ABORTED - SQ DELETION (00/08)] 00:22:53.886 [2024-10-07 09:43:42.772936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.772949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.772966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.772979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.772995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 
09:43:42.773268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.886 [2024-10-07 09:43:42.773409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.886 [2024-10-07 09:43:42.773424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.887 [2024-10-07 09:43:42.773437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.773483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:53.887 [2024-10-07 09:43:42.773570] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a48b30 was disconnected and freed. reset controller. 00:22:53.887 [2024-10-07 09:43:42.773939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.773963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.773979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.773994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 
[2024-10-07 09:43:42.774064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1557dc0 is same with the state(6) to be set 00:22:53.887 [2024-10-07 09:43:42.774115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560fd0 is same with the state(6) to be set 00:22:53.887 [2024-10-07 09:43:42.774282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cb380 is same with the state(6) to be set 00:22:53.887 [2024-10-07 09:43:42.774439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 
09:43:42.774500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1983430 is same with the state(6) to be set 00:22:53.887 [2024-10-07 09:43:42.774608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d4460 is same with the state(6) to be set 00:22:53.887 [2024-10-07 09:43:42.774793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19837b0 is same with the state(6) to be set 00:22:53.887 [2024-10-07 
09:43:42.774952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.774972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.774986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aae0 is same with the state(6) to be set 00:22:53.887 [2024-10-07 09:43:42.775110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155f790 is same with the state(6) to be set 00:22:53.887 [2024-10-07 09:43:42.775270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.887 [2024-10-07 09:43:42.775318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.887 [2024-10-07 09:43:42.775332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.888 [2024-10-07 
09:43:42.775345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.775360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.888 [2024-10-07 09:43:42.775373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.775385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155a170 is same with the state(6) to be set 00:22:53.888 [2024-10-07 09:43:42.775429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.888 [2024-10-07 09:43:42.775450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.775464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.888 [2024-10-07 09:43:42.775477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.775490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.888 [2024-10-07 09:43:42.775509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.775524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.888 [2024-10-07 09:43:42.775537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.775549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154ac90 is same with the state(6) to be set 00:22:53.888 [2024-10-07 09:43:42.776062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.888 [2024-10-07 09:43:42.776228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.888 [2024-10-07 09:43:42.776819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.888 [2024-10-07 09:43:42.776834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.776848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.776863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.776876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.776892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 
[2024-10-07 09:43:42.776905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.776921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.776934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.776950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.776964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.776979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.776993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777566] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777764] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.889 [2024-10-07 09:43:42.777916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.889 [2024-10-07 09:43:42.777932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.777946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.777970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.777983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.777998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:53.890 [2024-10-07 09:43:42.778128] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1766090 was disconnected and freed. reset controller. 
00:22:53.890 [2024-10-07 09:43:42.778352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778537] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.778983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.778999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 
[2024-10-07 09:43:42.779077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.890 [2024-10-07 09:43:42.779418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.890 [2024-10-07 09:43:42.779432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 [2024-10-07 09:43:42.779731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.891 [2024-10-07 09:43:42.779745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.891 
[2024-10-07 09:43:42.779760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.779781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.779799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.779813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.779829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.779843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.779858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.779872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.779893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.779907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.779923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.779937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.779953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.779973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.779989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.891 [2024-10-07 09:43:42.780337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.891 [2024-10-07 09:43:42.780351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04070 is same with the state(6) to be set
00:22:53.891 [2024-10-07 09:43:42.780423] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a04070 was disconnected and freed. reset controller.
00:22:53.891 [2024-10-07 09:43:42.784719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:53.891 [2024-10-07 09:43:42.784770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:53.891 [2024-10-07 09:43:42.784789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:53.891 [2024-10-07 09:43:42.784818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1560fd0 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.784842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155a170 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.784861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198aae0 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.784882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1557dc0 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.784917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cb380 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.784951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1983430 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.784982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d4460 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.785023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19837b0 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.785057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155f790 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.785087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154ac90 (9): Bad file descriptor
00:22:53.891 [2024-10-07 09:43:42.786777] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.891 [2024-10-07 09:43:42.786875] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.891 [2024-10-07 09:43:42.786951] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.891 [2024-10-07 09:43:42.787025] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.892 [2024-10-07 09:43:42.787221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.892 [2024-10-07 09:43:42.787255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198aae0 with addr=10.0.0.2, port=4420
00:22:53.892 [2024-10-07 09:43:42.787275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aae0 is same with the state(6) to be set
00:22:53.892 [2024-10-07 09:43:42.787356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.892 [2024-10-07 09:43:42.787383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155a170 with addr=10.0.0.2, port=4420
00:22:53.892 [2024-10-07 09:43:42.787399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155a170 is same with the state(6) to be set
00:22:53.892 [2024-10-07 09:43:42.787481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.892 [2024-10-07 09:43:42.787505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1560fd0 with addr=10.0.0.2, port=4420
00:22:53.892 [2024-10-07 09:43:42.787520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560fd0 is same with the state(6) to be set
00:22:53.892 [2024-10-07 09:43:42.787610] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.892 [2024-10-07 09:43:42.787701] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.892 [2024-10-07 09:43:42.787778] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.892 [2024-10-07 09:43:42.787894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198aae0 (9): Bad file descriptor
00:22:53.892 [2024-10-07 09:43:42.787923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155a170 (9): Bad file descriptor
00:22:53.892 [2024-10-07 09:43:42.787942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1560fd0 (9): Bad file descriptor
00:22:53.892 [2024-10-07 09:43:42.788037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:22:53.892 [2024-10-07 09:43:42.788058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:22:53.892 [2024-10-07 09:43:42.788076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:22:53.892 [2024-10-07 09:43:42.788097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:53.892 [2024-10-07 09:43:42.788111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:53.892 [2024-10-07 09:43:42.788123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:53.892 [2024-10-07 09:43:42.788140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:22:53.892 [2024-10-07 09:43:42.788154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:22:53.892 [2024-10-07 09:43:42.788165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:22:53.892 [2024-10-07 09:43:42.788221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.892 [2024-10-07 09:43:42.788240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.892 [2024-10-07 09:43:42.788252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.892 [2024-10-07 09:43:42.794945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.892 [2024-10-07 09:43:42.795781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.892 [2024-10-07 09:43:42.795798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.795815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.795828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.795844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.795858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.795874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.795887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.795905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.795919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.795935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.795949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.795965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.795978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.795994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.893 [2024-10-07 09:43:42.796818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.893 [2024-10-07 09:43:42.796834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.796848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.796864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.796877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.796894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.796908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.796923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.796937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.796957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.796973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.796988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1764d30 is same with the state(6) to be set
00:22:53.894 [2024-10-07 09:43:42.798273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.894 [2024-10-07 09:43:42.798636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.894 [2024-10-07 09:43:42.798653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.894 [2024-10-07 09:43:42.798844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.798977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.798993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.799027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.799056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.799085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.799115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.799145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.799179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.799208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.894 [2024-10-07 09:43:42.799238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.894 [2024-10-07 09:43:42.799251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 
09:43:42.799523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.799975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.799989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.800018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 
[2024-10-07 09:43:42.800049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.800078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.800107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.800137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.800170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.800202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.800232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.800252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02a80 is same with the state(6) to be set 00:22:53.895 [2024-10-07 09:43:42.801507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.801530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.801552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.801567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.801583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.801597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.801626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.895 [2024-10-07 09:43:42.801642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.895 [2024-10-07 09:43:42.801656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.896 [2024-10-07 09:43:42.801836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.801984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.801997] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.802013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.802027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.802043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.802056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.802072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.802085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.802101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.802115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.802130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.802144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.896 [2024-10-07 09:43:42.802160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.896 [2024-10-07 09:43:42.802173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: every outstanding READ and WRITE on sqid:1 (cid:0-63, lba:16384-33280, len:128) completes with ABORTED - SQ DELETION (00/08) ...]
00:22:53.897 [2024-10-07 09:43:42.803475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1967e30 is same with the state(6) to be set
00:22:53.899 [2024-10-07 09:43:42.806705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a05560 is same with the state(6) to be set
00:22:53.899 [2024-10-07 09:43:42.808112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.899 [2024-10-07 09:43:42.808298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.899 [2024-10-07 09:43:42.808642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.899 [2024-10-07 09:43:42.808658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.808950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 
09:43:42.808979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.808995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809141] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 
[2024-10-07 09:43:42.809500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.900 [2024-10-07 09:43:42.809851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.900 [2024-10-07 09:43:42.809864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.809881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.809894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.809908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4a060 is same with the state(6) to be set 00:22:53.901 [2024-10-07 09:43:42.811141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.901 [2024-10-07 09:43:42.811247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.901 [2024-10-07 09:43:42.811781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.811970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.811986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.901 [2024-10-07 09:43:42.812295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.901 [2024-10-07 09:43:42.812311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 
09:43:42.812445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812607] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 
[2024-10-07 09:43:42.812958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.812974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.812988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.813003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.813017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.813032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.813045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.813061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.813074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.813088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b3d0 is same with the state(6) to be set 00:22:53.902 [2024-10-07 09:43:42.815397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.815424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.815447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.815462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.815484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.815499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.815516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.815529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.815546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.815560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.815576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.815590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.815605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.815619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.815635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.902 [2024-10-07 09:43:42.815649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.902 [2024-10-07 09:43:42.815674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.903 [2024-10-07 09:43:42.815794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.815976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.815990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 
09:43:42.816456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816623] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.903 [2024-10-07 09:43:42.816838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.903 [2024-10-07 09:43:42.816852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.816868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.816881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.816897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.816911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.816926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.816940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.816955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 
[2024-10-07 09:43:42.816969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.816989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.904 [2024-10-07 09:43:42.817324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.904 [2024-10-07 09:43:42.817338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4c8b0 is same with the state(6) to be set 00:22:53.904 [2024-10-07 09:43:42.819659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:53.904 [2024-10-07 09:43:42.819698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:53.904 [2024-10-07 09:43:42.819718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:53.904 [2024-10-07 09:43:42.819736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:53.904 [2024-10-07 09:43:42.819856] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:53.904 [2024-10-07 09:43:42.819882] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:53.904 [2024-10-07 09:43:42.819903] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:53.904 [2024-10-07 09:43:42.820011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:53.904 [2024-10-07 09:43:42.820035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:53.904 task offset: 29696 on job bdev=Nvme7n1 fails
00:22:53.904
00:22:53.904 Latency(us)
00:22:53.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:53.904 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme1n1 ended in about 0.97 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme1n1 : 0.97 131.63 8.23 65.81 0.00 320785.32 21845.33 251658.24
00:22:53.904 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme2n1 ended in about 0.96 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme2n1 : 0.96 200.52 12.53 66.84 0.00 232262.45 6359.42 233016.89
00:22:53.904 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme3n1 ended in about 0.98 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme3n1 : 0.98 196.79 12.30 65.60 0.00 232205.08 20000.62 250104.79
00:22:53.904 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme4n1 ended in about 0.98 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme4n1 : 0.98 201.25 12.58 65.38 0.00 224036.59 19612.25 251658.24
00:22:53.904 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme5n1 ended in about 0.96 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme5n1 : 0.96 200.26 12.52 66.75 0.00 218833.92 12718.84 257872.02
00:22:53.904 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme6n1 ended in about 0.98 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme6n1 : 0.98 130.33 8.15 65.17 0.00 293664.74 21359.88 278066.82
00:22:53.904 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme7n1 ended in about 0.96 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme7n1 : 0.96 200.84 12.55 66.95 0.00 209142.33 9417.77 253211.69
00:22:53.904 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme8n1 ended in about 0.99 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme8n1 : 0.99 194.87 12.18 64.96 0.00 212074.57 24272.59 256318.58
00:22:53.904 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme9n1 ended in about 0.99 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme9n1 : 0.99 129.49 8.09 64.75 0.00 278000.58 23495.87 292047.83
00:22:53.904 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.904 Job: Nvme10n1 ended in about 0.99 seconds with error
00:22:53.904 Verification LBA range: start 0x0 length 0x400
00:22:53.904 Nvme10n1 : 0.99 128.94 8.06 64.47 0.00 273565.77 21165.70 264085.81
00:22:53.904 ===================================================================================================================
00:22:53.904 Total : 1714.91 107.18 656.67 0.00 244740.32 6359.42 292047.83
00:22:53.904 [2024-10-07 09:43:42.848944] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:53.904 [2024-10-07 09:43:42.849021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:53.904 [2024-10-07 09:43:42.849297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.904 [2024-10-07
09:43:42.849333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154ac90 with addr=10.0.0.2, port=4420 00:22:53.904 [2024-10-07 09:43:42.849354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154ac90 is same with the state(6) to be set 00:22:53.904 [2024-10-07 09:43:42.849449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.904 [2024-10-07 09:43:42.849486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155f790 with addr=10.0.0.2, port=4420 00:22:53.904 [2024-10-07 09:43:42.849502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155f790 is same with the state(6) to be set 00:22:53.904 [2024-10-07 09:43:42.849589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.904 [2024-10-07 09:43:42.849613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1557dc0 with addr=10.0.0.2, port=4420 00:22:53.904 [2024-10-07 09:43:42.849629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1557dc0 is same with the state(6) to be set 00:22:53.904 [2024-10-07 09:43:42.849729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.904 [2024-10-07 09:43:42.849754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14cb380 with addr=10.0.0.2, port=4420 00:22:53.904 [2024-10-07 09:43:42.849770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cb380 is same with the state(6) to be set 00:22:54.166 [2024-10-07 09:43:42.851751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:54.166 [2024-10-07 09:43:42.851783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:54.166 [2024-10-07 09:43:42.851917] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.166 [2024-10-07 09:43:42.851946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1983430 with addr=10.0.0.2, port=4420 00:22:54.166 [2024-10-07 09:43:42.851963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1983430 is same with the state(6) to be set 00:22:54.166 [2024-10-07 09:43:42.852050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.166 [2024-10-07 09:43:42.852076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19837b0 with addr=10.0.0.2, port=4420 00:22:54.166 [2024-10-07 09:43:42.852091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19837b0 is same with the state(6) to be set 00:22:54.166 [2024-10-07 09:43:42.852175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.166 [2024-10-07 09:43:42.852201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d4460 with addr=10.0.0.2, port=4420 00:22:54.166 [2024-10-07 09:43:42.852226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d4460 is same with the state(6) to be set 00:22:54.166 [2024-10-07 09:43:42.852251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154ac90 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.852273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155f790 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.852304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1557dc0 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.852322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cb380 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.852370] 
bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.166 [2024-10-07 09:43:42.852407] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.166 [2024-10-07 09:43:42.852426] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.166 [2024-10-07 09:43:42.852446] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.166 [2024-10-07 09:43:42.852466] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.166 [2024-10-07 09:43:42.852544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:54.166 [2024-10-07 09:43:42.852671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.166 [2024-10-07 09:43:42.852700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1560fd0 with addr=10.0.0.2, port=4420 00:22:54.166 [2024-10-07 09:43:42.852715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1560fd0 is same with the state(6) to be set 00:22:54.166 [2024-10-07 09:43:42.852795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.166 [2024-10-07 09:43:42.852820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155a170 with addr=10.0.0.2, port=4420 00:22:54.166 [2024-10-07 09:43:42.852836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155a170 is same with the state(6) to be set 00:22:54.166 [2024-10-07 09:43:42.852854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1983430 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.852872] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19837b0 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.852889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d4460 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.852905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.852917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.852932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.166 [2024-10-07 09:43:42.852952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.852975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.852987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:54.166 [2024-10-07 09:43:42.853003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.853016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.853027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:22:54.166 [2024-10-07 09:43:42.853043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.853056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.853068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:54.166 [2024-10-07 09:43:42.853168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.166 [2024-10-07 09:43:42.853195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.166 [2024-10-07 09:43:42.853209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.166 [2024-10-07 09:43:42.853220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.166 [2024-10-07 09:43:42.853305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.166 [2024-10-07 09:43:42.853330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198aae0 with addr=10.0.0.2, port=4420 00:22:54.166 [2024-10-07 09:43:42.853345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aae0 is same with the state(6) to be set 00:22:54.166 [2024-10-07 09:43:42.853362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1560fd0 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.853380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155a170 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.853395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.853407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.853419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:54.166 [2024-10-07 09:43:42.853436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.853449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.853461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:54.166 [2024-10-07 09:43:42.853476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.853489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.853501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:54.166 [2024-10-07 09:43:42.853540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.166 [2024-10-07 09:43:42.853559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.166 [2024-10-07 09:43:42.853571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.166 [2024-10-07 09:43:42.853588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198aae0 (9): Bad file descriptor 00:22:54.166 [2024-10-07 09:43:42.853604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.853616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.853629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:54.166 [2024-10-07 09:43:42.853644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.853658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.853690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:54.166 [2024-10-07 09:43:42.853734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.166 [2024-10-07 09:43:42.853752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.166 [2024-10-07 09:43:42.853765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:54.166 [2024-10-07 09:43:42.853777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:54.166 [2024-10-07 09:43:42.853794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:54.166 [2024-10-07 09:43:42.853831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.426 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 268640 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 268640 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 268640 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:55.364 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:55.365 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:55.365 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:55.365 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:55.365 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:55.365 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.365 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:55.625 rmmod nvme_tcp 00:22:55.625 rmmod nvme_fabrics 00:22:55.625 rmmod nvme_keyring 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:55.625 09:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 268469 ']' 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 268469 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 268469 ']' 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 268469 00:22:55.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (268469) - No such process 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 268469 is not found' 00:22:55.625 Process with pid 268469 is not found 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.625 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.533 00:22:57.533 real 0m7.536s 00:22:57.533 user 0m18.555s 00:22:57.533 sys 0m1.451s 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.533 ************************************ 00:22:57.533 END TEST nvmf_shutdown_tc3 00:22:57.533 ************************************ 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:57.533 ************************************ 00:22:57.533 START TEST nvmf_shutdown_tc4 00:22:57.533 ************************************ 00:22:57.533 09:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:57.533 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.534 09:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.534 09:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:22:57.534 Found 0000:09:00.0 (0x8086 - 0x1592) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:22:57.534 Found 0000:09:00.1 (0x8086 - 0x1592) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.534 09:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:22:57.534 Found net devices under 0000:09:00.0: cvl_0_0 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.534 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:57.534 Found net devices under 0000:09:00.1: cvl_0_1 00:22:57.535 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.535 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:57.535 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:57.535 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:57.535 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == 
tcp ]] 00:22:57.535 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.795 09:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:57.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:22:57.795 00:22:57.795 --- 10.0.0.2 ping statistics --- 00:22:57.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.795 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:57.795 00:22:57.795 --- 10.0.0.1 ping statistics --- 00:22:57.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.795 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:57.795 09:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=269514 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 269514 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 269514 ']' 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.795 09:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.795 [2024-10-07 09:43:46.745206] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:22:57.795 [2024-10-07 09:43:46.745312] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.056 [2024-10-07 09:43:46.808458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.056 [2024-10-07 09:43:46.918200] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.056 [2024-10-07 09:43:46.918270] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.056 [2024-10-07 09:43:46.918299] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.056 [2024-10-07 09:43:46.918311] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.056 [2024-10-07 09:43:46.918321] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.056 [2024-10-07 09:43:46.919950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.056 [2024-10-07 09:43:46.920014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.056 [2024-10-07 09:43:46.920082] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.056 [2024-10-07 09:43:46.920085] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.315 [2024-10-07 09:43:47.089124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.315 09:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:58.315 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.316 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.316 Malloc1 00:22:58.316 [2024-10-07 09:43:47.178312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.316 Malloc2 00:22:58.316 Malloc3 00:22:58.316 Malloc4 00:22:58.574 Malloc5 00:22:58.574 Malloc6 00:22:58.574 Malloc7 00:22:58.574 Malloc8 00:22:58.574 Malloc9 
00:22:58.833 Malloc10 00:22:58.833 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.833 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.833 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.833 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.833 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=269687 00:22:58.833 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:58.833 09:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:58.833 [2024-10-07 09:43:47.672715] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 269514 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 269514 ']' 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 269514 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 269514 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 269514' 00:23:04.116 killing process with pid 269514 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 269514 00:23:04.116 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 269514 00:23:04.116 [2024-10-07 09:43:52.678972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.679052] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.679073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.679090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.679103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.679122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.679137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.679152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.679166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1320 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681369] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 [2024-10-07 09:43:52.681442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc1cc0 is same with the state(6) to be set 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 
00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 [2024-10-07 09:43:52.685089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting 
I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 [2024-10-07 09:43:52.685601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with starting I/O failed: -6 00:23:04.116 the state(6) to be set 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 starting I/O failed: -6 00:23:04.116 [2024-10-07 09:43:52.685638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with the state(6) to be set 00:23:04.116 Write completed with error (sct=0, sc=8) 00:23:04.116 [2024-10-07 09:43:52.685663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 [2024-10-07 09:43:52.685686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.685702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 [2024-10-07 09:43:52.685714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 [2024-10-07 09:43:52.685734] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with starting I/O failed: -6 00:23:04.117 the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 [2024-10-07 09:43:52.685749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.685762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 [2024-10-07 09:43:52.685774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2eb0 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O 
failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 [2024-10-07 09:43:52.686166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with 
error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 [2024-10-07 09:43:52.686814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc49e0 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 [2024-10-07 09:43:52.686845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc49e0 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 [2024-10-07 09:43:52.686860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc49e0 is same with starting I/O failed: -6 00:23:04.117 the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.686879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc49e0 is same with Write completed with error (sct=0, sc=8) 00:23:04.117 the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.686893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc49e0 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 [2024-10-07 09:43:52.686906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc49e0 is same with the state(6) to be set 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 
starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 [2024-10-07 09:43:52.687285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.117 [2024-10-07 09:43:52.687319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 [2024-10-07 09:43:52.687449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4d60 is same with the state(6) to be set 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 
00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.117 Write completed with error (sct=0, sc=8) 00:23:04.117 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc50e0 is same with Write completed with error (sct=0, sc=8) 00:23:04.118 the state(6) to be set 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc50e0 is same with Write completed with error (sct=0, sc=8) 00:23:04.118 the state(6) to be set 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc50e0 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 [2024-10-07 09:43:52.688065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc50e0 is same with 
the state(6) to be set 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc50e0 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error 
(sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 [2024-10-07 09:43:52.688485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with starting I/O failed: -6 00:23:04.118 the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.688498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with Write completed with error (sct=0, sc=8) 00:23:04.118 the state(6) to be set 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with Write completed with error (sct=0, sc=8) 00:23:04.118 the state(6) to be set 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 [2024-10-07 
09:43:52.688578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baea40 is same with the state(6) to be set 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.688992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.118 NVMe io qpair process completion error 00:23:04.118 [2024-10-07 09:43:52.691700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c30230 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.691734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c30230 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.691753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c30230 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.691859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94d60 is same with the state(6) to be set 00:23:04.118 [2024-10-07 
09:43:52.691887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94d60 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.691903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94d60 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.697349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b960e0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.697395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b960e0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.697410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b960e0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.697424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b960e0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.697438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b960e0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.697451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b960e0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.698441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b965b0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.698474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b965b0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.698490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b965b0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.698503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b965b0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.698516] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b965b0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.698540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b965b0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.698553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b965b0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.698565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b965b0 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.699099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95740 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.699139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95740 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.699158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95740 is same with the state(6) to be set 00:23:04.118 [2024-10-07 09:43:52.699171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95740 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 [2024-10-07 09:43:52.699184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95740 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 [2024-10-07 09:43:52.699201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95740 is same with the state(6) to be set 00:23:04.118 starting I/O failed: -6 00:23:04.118 [2024-10-07 09:43:52.699224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95740 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 [2024-10-07 
09:43:52.699239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95740 is same with the state(6) to be set 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 Write completed with error (sct=0, sc=8) 00:23:04.118 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with 
error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.699919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96f70 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.699962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96f70 is same with the state(6) to be set 00:23:04.119 [2024-10-07 09:43:52.699980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96f70 is same with the state(6) to be set 00:23:04.119 [2024-10-07 09:43:52.699993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96f70 is same with the state(6) to be set 00:23:04.119 [2024-10-07 09:43:52.700005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96f70 is same with the state(6) to be set 00:23:04.119 [2024-10-07 09:43:52.700000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.119 [2024-10-07 09:43:52.700035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96f70 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error
(sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 [2024-10-07 09:43:52.700375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.700405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 [2024-10-07 09:43:52.700422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 [2024-10-07 09:43:52.700435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.700447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.700466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 [2024-10-07 09:43:52.700479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.700491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.700506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97440 is same with the state(6) to be set 00:23:04.119 
starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.700883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97910 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 [2024-10-07 09:43:52.700911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97910 is same with the state(6) to be set 00:23:04.119 Write completed with error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6 00:23:04.119 [2024-10-07 09:43:52.700925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97910 is same with the state(6) to be set 00:23:04.119 Write completed with 
error (sct=0, sc=8) 00:23:04.119 starting I/O failed: -6
00:23:04.119 [2024-10-07 09:43:52.700943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97910 is same with the state(6) to be set
00:23:04.119 [2024-10-07 09:43:52.700965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b97910 is same with the state(6) to be set
00:23:04.119 Write completed with error (sct=0, sc=8)
00:23:04.119 starting I/O failed: -6
    [the two lines above repeat many times throughout this interval; repeats omitted]
00:23:04.120 [2024-10-07 09:43:52.701132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.120 [2024-10-07 09:43:52.701417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96aa0 is same with the state(6) to be set
    [same message repeated at .701445, .701461, .701474, .701487, .701499, .701512, .701524]
00:23:04.120 [2024-10-07 09:43:52.702284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.121 [2024-10-07 09:43:52.703903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.121 NVMe io qpair process completion error
00:23:04.121 [2024-10-07 09:43:52.705239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.121 [2024-10-07 09:43:52.706306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.122 [2024-10-07 09:43:52.707471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.122 [2024-10-07 09:43:52.709177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.122 NVMe io qpair process completion error
00:23:04.123 [2024-10-07 09:43:52.710472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.123 [2024-10-07 09:43:52.711422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.124 [2024-10-07 09:43:52.712579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.124 Write completed with error (sct=0,
sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 [2024-10-07 09:43:52.715520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.124 NVMe io qpair process completion error 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error 
(sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 Write completed with error (sct=0, sc=8) 00:23:04.124 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 
Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 [2024-10-07 09:43:52.716745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, 
sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 [2024-10-07 09:43:52.717816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.125 Write completed 
with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 
starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 [2024-10-07 09:43:52.718938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.125 
Write completed with error (sct=0, sc=8) 00:23:04.125 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 
00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 [2024-10-07 09:43:52.721699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.126 NVMe io qpair process completion error 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed 
with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 
00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 [2024-10-07 09:43:52.723023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.126 starting I/O failed: -6 00:23:04.126 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with 
error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 [2024-10-07 09:43:52.724081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.127 Write completed 
with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 starting I/O failed: -6 00:23:04.127 Write completed with error (sct=0, sc=8) 00:23:04.127 
00:23:04.127 Write completed with error (sct=0, sc=8)
00:23:04.127 starting I/O failed: -6
00:23:04.127 [2024-10-07 09:43:52.725177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.128 [2024-10-07 09:43:52.728506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.128 NVMe io qpair process completion error
00:23:04.128 [2024-10-07 09:43:52.729842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.128 [2024-10-07 09:43:52.730899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.129 [2024-10-07 09:43:52.732202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.129 [2024-10-07 09:43:52.734267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.129 NVMe io qpair process completion error
00:23:04.131 Write completed with error (sct=0, sc=8)
00:23:04.131 starting I/O failed: -6
00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 [2024-10-07 09:43:52.740994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 
Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, 
sc=8) 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.131 starting I/O failed: -6 00:23:04.131 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 [2024-10-07 09:43:52.742103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write 
completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 
Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 
00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 [2024-10-07 09:43:52.745161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.132 NVMe io qpair process completion error 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 
00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.132 [2024-10-07 09:43:52.746488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.132 starting I/O failed: -6 00:23:04.132 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting 
I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error 
(sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 [2024-10-07 09:43:52.747538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 
starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 
Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 [2024-10-07 09:43:52.748705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error 
(sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with 
error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.133 starting I/O failed: -6 00:23:04.133 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed 
with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 Write completed with error (sct=0, sc=8) 00:23:04.134 starting I/O failed: -6 00:23:04.134 [2024-10-07 09:43:52.751940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.134 NVMe io qpair process completion error 00:23:04.134 Initializing NVMe Controllers 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:04.134 Controller IO queue size 128, less than required. 00:23:04.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:04.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:04.134 Initialization complete. Launching workers. 
00:23:04.134 ======================================================== 00:23:04.134 Latency(us) 00:23:04.134 Device Information : IOPS MiB/s Average min max 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1770.54 76.08 72318.27 965.25 127122.47 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1747.84 75.10 72520.32 952.15 128352.06 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1688.49 72.55 75790.36 1462.66 136503.76 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1728.06 74.25 74075.17 1092.85 137864.06 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1759.30 75.59 72802.13 1046.59 125847.71 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1757.63 75.52 72111.39 1043.74 126009.68 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1751.18 75.25 72397.68 941.27 126704.78 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1717.65 73.81 73833.20 949.21 126512.08 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1707.03 73.35 74329.42 1091.26 126815.93 00:23:04.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1677.87 72.10 75661.14 1112.75 131981.29 00:23:04.134 ======================================================== 00:23:04.134 Total : 17305.59 743.60 73561.95 941.27 137864.06 00:23:04.134 00:23:04.134 [2024-10-07 09:43:52.755964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaab0 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec7f0 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756113] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0d10 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff1370 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff16a0 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec9d0 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feade0 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecbb0 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fea780 is same with the state(6) to be set 00:23:04.134 [2024-10-07 09:43:52.756508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff1040 is same with the state(6) to be set 00:23:04.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:04.394 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 269687 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 269687 00:23:05.333 09:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 269687 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:05.333 09:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.333 rmmod nvme_tcp 00:23:05.333 rmmod nvme_fabrics 00:23:05.333 rmmod nvme_keyring 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 269514 ']' 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 269514 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 269514 ']' 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 269514 00:23:05.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (269514) - No such process 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 269514 is not found' 
00:23:05.333 Process with pid 269514 is not found 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.333 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.863 00:23:07.863 real 0m9.823s 00:23:07.863 user 0m23.103s 00:23:07.863 sys 0m5.973s 00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.863 ************************************ 00:23:07.863 END TEST nvmf_shutdown_tc4 00:23:07.863 ************************************ 00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:07.863 00:23:07.863 real 0m37.566s 00:23:07.863 user 1m40.533s 00:23:07.863 sys 0m12.372s 00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.863 ************************************ 00:23:07.863 END TEST nvmf_shutdown 00:23:07.863 ************************************ 00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:07.863 00:23:07.863 real 11m34.949s 00:23:07.863 user 27m29.166s 00:23:07.863 sys 2m42.817s 00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.863 09:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:07.863 ************************************ 00:23:07.863 END TEST nvmf_target_extra 00:23:07.863 ************************************ 00:23:07.863 09:43:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:07.863 09:43:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:07.863 09:43:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.863 09:43:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.863 ************************************ 00:23:07.863 START TEST nvmf_host 00:23:07.863 ************************************ 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:07.863 * Looking for test storage... 00:23:07.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:07.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.863 --rc genhtml_branch_coverage=1 00:23:07.863 --rc genhtml_function_coverage=1 00:23:07.863 --rc genhtml_legend=1 00:23:07.863 --rc geninfo_all_blocks=1 00:23:07.863 --rc geninfo_unexecuted_blocks=1 00:23:07.863 00:23:07.863 ' 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:07.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.863 --rc genhtml_branch_coverage=1 00:23:07.863 --rc genhtml_function_coverage=1 00:23:07.863 --rc genhtml_legend=1 00:23:07.863 --rc 
geninfo_all_blocks=1 00:23:07.863 --rc geninfo_unexecuted_blocks=1 00:23:07.863 00:23:07.863 ' 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:07.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.863 --rc genhtml_branch_coverage=1 00:23:07.863 --rc genhtml_function_coverage=1 00:23:07.863 --rc genhtml_legend=1 00:23:07.863 --rc geninfo_all_blocks=1 00:23:07.863 --rc geninfo_unexecuted_blocks=1 00:23:07.863 00:23:07.863 ' 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:07.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.863 --rc genhtml_branch_coverage=1 00:23:07.863 --rc genhtml_function_coverage=1 00:23:07.863 --rc genhtml_legend=1 00:23:07.863 --rc geninfo_all_blocks=1 00:23:07.863 --rc geninfo_unexecuted_blocks=1 00:23:07.863 00:23:07.863 ' 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.863 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.864 ************************************ 00:23:07.864 START TEST nvmf_multicontroller 00:23:07.864 ************************************ 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.864 * Looking for test storage... 
00:23:07.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:07.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.864 --rc genhtml_branch_coverage=1 00:23:07.864 --rc genhtml_function_coverage=1 
00:23:07.864 --rc genhtml_legend=1 00:23:07.864 --rc geninfo_all_blocks=1 00:23:07.864 --rc geninfo_unexecuted_blocks=1 00:23:07.864 00:23:07.864 ' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:07.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.864 --rc genhtml_branch_coverage=1 00:23:07.864 --rc genhtml_function_coverage=1 00:23:07.864 --rc genhtml_legend=1 00:23:07.864 --rc geninfo_all_blocks=1 00:23:07.864 --rc geninfo_unexecuted_blocks=1 00:23:07.864 00:23:07.864 ' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:07.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.864 --rc genhtml_branch_coverage=1 00:23:07.864 --rc genhtml_function_coverage=1 00:23:07.864 --rc genhtml_legend=1 00:23:07.864 --rc geninfo_all_blocks=1 00:23:07.864 --rc geninfo_unexecuted_blocks=1 00:23:07.864 00:23:07.864 ' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:07.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.864 --rc genhtml_branch_coverage=1 00:23:07.864 --rc genhtml_function_coverage=1 00:23:07.864 --rc genhtml_legend=1 00:23:07.864 --rc geninfo_all_blocks=1 00:23:07.864 --rc geninfo_unexecuted_blocks=1 00:23:07.864 00:23:07.864 ' 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.864 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.865 09:43:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.865 09:43:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:09.771 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:09.771 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.771 09:43:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:09.771 Found net devices under 0000:09:00.0: cvl_0_0 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:09.771 Found net devices under 0000:09:00.1: cvl_0_1 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.771 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.028 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.028 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.028 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.028 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:23:10.029 00:23:10.029 --- 10.0.0.2 ping statistics --- 00:23:10.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.029 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:23:10.029 00:23:10.029 --- 10.0.0.1 ping statistics --- 00:23:10.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.029 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=272348 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 272348 00:23:10.029 09:43:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 272348 ']' 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.029 09:43:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.029 [2024-10-07 09:43:58.904320] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:23:10.029 [2024-10-07 09:43:58.904401] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.029 [2024-10-07 09:43:58.966638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:10.288 [2024-10-07 09:43:59.068042] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.288 [2024-10-07 09:43:59.068097] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:10.288 [2024-10-07 09:43:59.068125] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.288 [2024-10-07 09:43:59.068135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.288 [2024-10-07 09:43:59.068144] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.288 [2024-10-07 09:43:59.068976] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.288 [2024-10-07 09:43:59.069023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.288 [2024-10-07 09:43:59.069026] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.288 [2024-10-07 09:43:59.207612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.288 Malloc0 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.288 [2024-10-07 
09:43:59.270078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.288 [2024-10-07 09:43:59.277918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.288 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 Malloc1 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=272380 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 272380 /var/tmp/bdevperf.sock 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@831 -- # '[' -z 272380 ']' 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.546 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.804 NVMe0n1 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.804 1 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:10.804 09:43:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.804 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.804 request: 00:23:10.804 { 00:23:10.804 "name": "NVMe0", 00:23:10.804 "trtype": "tcp", 00:23:10.804 "traddr": "10.0.0.2", 00:23:10.804 "adrfam": "ipv4", 00:23:10.804 "trsvcid": "4420", 00:23:10.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.804 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:10.804 "hostaddr": "10.0.0.1", 00:23:10.804 "prchk_reftag": false, 00:23:10.804 "prchk_guard": false, 00:23:10.804 "hdgst": false, 00:23:10.804 "ddgst": false, 00:23:10.804 "allow_unrecognized_csi": false, 00:23:10.804 "method": "bdev_nvme_attach_controller", 00:23:10.804 "req_id": 1 00:23:10.805 } 00:23:10.805 Got JSON-RPC error response 00:23:10.805 response: 00:23:10.805 { 00:23:10.805 "code": -114, 00:23:10.805 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:10.805 } 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:10.805 09:43:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.805 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.063 request: 00:23:11.063 { 00:23:11.063 "name": "NVMe0", 00:23:11.063 "trtype": "tcp", 00:23:11.063 "traddr": "10.0.0.2", 00:23:11.063 "adrfam": "ipv4", 00:23:11.063 "trsvcid": "4420", 00:23:11.063 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.063 "hostaddr": "10.0.0.1", 00:23:11.063 "prchk_reftag": false, 00:23:11.063 "prchk_guard": false, 00:23:11.063 "hdgst": false, 00:23:11.063 "ddgst": false, 00:23:11.063 "allow_unrecognized_csi": false, 00:23:11.063 "method": "bdev_nvme_attach_controller", 00:23:11.063 "req_id": 1 00:23:11.063 } 00:23:11.063 Got JSON-RPC error response 00:23:11.063 response: 00:23:11.063 { 00:23:11.063 "code": -114, 00:23:11.063 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:11.063 } 00:23:11.063 09:43:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.063 request: 00:23:11.063 { 00:23:11.063 "name": "NVMe0", 00:23:11.063 "trtype": "tcp", 00:23:11.063 "traddr": "10.0.0.2", 00:23:11.063 "adrfam": "ipv4", 00:23:11.063 "trsvcid": "4420", 00:23:11.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.063 "hostaddr": "10.0.0.1", 00:23:11.063 "prchk_reftag": false, 00:23:11.063 "prchk_guard": false, 00:23:11.063 "hdgst": false, 00:23:11.063 "ddgst": false, 00:23:11.063 "multipath": "disable", 00:23:11.063 "allow_unrecognized_csi": false, 00:23:11.063 "method": "bdev_nvme_attach_controller", 00:23:11.063 "req_id": 1 00:23:11.063 } 00:23:11.063 Got JSON-RPC error response 00:23:11.063 response: 00:23:11.063 { 00:23:11.063 "code": -114, 00:23:11.063 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:11.063 } 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.063 request: 00:23:11.063 { 00:23:11.063 "name": "NVMe0", 00:23:11.063 "trtype": "tcp", 00:23:11.063 "traddr": "10.0.0.2", 00:23:11.063 "adrfam": "ipv4", 00:23:11.063 "trsvcid": "4420", 00:23:11.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.063 "hostaddr": "10.0.0.1", 00:23:11.063 "prchk_reftag": false, 00:23:11.063 "prchk_guard": false, 00:23:11.063 "hdgst": false, 00:23:11.063 "ddgst": false, 00:23:11.063 "multipath": "failover", 00:23:11.063 "allow_unrecognized_csi": false, 00:23:11.063 "method": "bdev_nvme_attach_controller", 00:23:11.063 "req_id": 1 00:23:11.063 } 00:23:11.063 Got JSON-RPC error response 00:23:11.063 response: 00:23:11.063 { 00:23:11.063 "code": -114, 00:23:11.063 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:11.063 } 00:23:11.063 09:43:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.063 09:43:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.322 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.322 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:11.322 09:44:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:12.698 { 00:23:12.698 "results": [ 00:23:12.698 { 00:23:12.698 "job": "NVMe0n1", 00:23:12.698 "core_mask": "0x1", 00:23:12.698 "workload": "write", 00:23:12.698 "status": "finished", 00:23:12.698 "queue_depth": 128, 00:23:12.698 "io_size": 4096, 00:23:12.698 "runtime": 1.007585, 00:23:12.698 "iops": 18280.343593840716, 00:23:12.698 "mibps": 71.4075921634403, 00:23:12.698 "io_failed": 0, 00:23:12.698 "io_timeout": 0, 00:23:12.698 "avg_latency_us": 6983.774016926966, 00:23:12.698 "min_latency_us": 4174.885925925926, 00:23:12.698 "max_latency_us": 17379.176296296297 00:23:12.698 } 00:23:12.698 ], 00:23:12.698 "core_count": 1 00:23:12.698 } 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 272380 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 272380 ']' 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 272380 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 272380 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 272380' 00:23:12.698 killing process with pid 272380 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 272380 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 272380 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:12.698 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:12.698 [2024-10-07 09:43:59.384729] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:23:12.698 [2024-10-07 09:43:59.384814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272380 ] 00:23:12.698 [2024-10-07 09:43:59.440389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.698 [2024-10-07 09:43:59.550215] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.698 [2024-10-07 09:44:00.180808] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name b94cb83d-8f9d-4639-823c-a01c70cf7e4c already exists 00:23:12.698 [2024-10-07 09:44:00.180850] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:b94cb83d-8f9d-4639-823c-a01c70cf7e4c alias for bdev NVMe1n1 00:23:12.698 [2024-10-07 09:44:00.180866] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:12.698 Running I/O for 1 seconds... 00:23:12.698 18228.00 IOPS, 71.20 MiB/s 00:23:12.698 Latency(us) 00:23:12.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.698 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:12.698 NVMe0n1 : 1.01 18280.34 71.41 0.00 0.00 6983.77 4174.89 17379.18 00:23:12.698 =================================================================================================================== 00:23:12.698 Total : 18280.34 71.41 0.00 0.00 6983.77 4174.89 17379.18 00:23:12.698 Received shutdown signal, test time was about 1.000000 seconds 00:23:12.698 00:23:12.698 Latency(us) 00:23:12.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.698 =================================================================================================================== 00:23:12.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.698 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:12.698 09:44:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.698 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.698 rmmod nvme_tcp 00:23:12.698 rmmod nvme_fabrics 00:23:12.958 rmmod nvme_keyring 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 272348 ']' 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 272348 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 272348 ']' 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 272348 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:12.958 09:44:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 272348 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 272348' 00:23:12.958 killing process with pid 272348 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 272348 00:23:12.958 09:44:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 272348 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.217 09:44:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.125 09:44:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.383 00:23:15.383 real 0m7.536s 00:23:15.383 user 0m12.002s 00:23:15.383 sys 0m2.245s 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.383 ************************************ 00:23:15.383 END TEST nvmf_multicontroller 00:23:15.383 ************************************ 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.383 ************************************ 00:23:15.383 START TEST nvmf_aer 00:23:15.383 ************************************ 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:15.383 * Looking for test storage... 
00:23:15.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.383 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:15.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.384 --rc genhtml_branch_coverage=1 00:23:15.384 --rc genhtml_function_coverage=1 00:23:15.384 --rc genhtml_legend=1 00:23:15.384 --rc geninfo_all_blocks=1 00:23:15.384 --rc geninfo_unexecuted_blocks=1 00:23:15.384 00:23:15.384 ' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:15.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.384 --rc 
genhtml_branch_coverage=1 00:23:15.384 --rc genhtml_function_coverage=1 00:23:15.384 --rc genhtml_legend=1 00:23:15.384 --rc geninfo_all_blocks=1 00:23:15.384 --rc geninfo_unexecuted_blocks=1 00:23:15.384 00:23:15.384 ' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:15.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.384 --rc genhtml_branch_coverage=1 00:23:15.384 --rc genhtml_function_coverage=1 00:23:15.384 --rc genhtml_legend=1 00:23:15.384 --rc geninfo_all_blocks=1 00:23:15.384 --rc geninfo_unexecuted_blocks=1 00:23:15.384 00:23:15.384 ' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:15.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.384 --rc genhtml_branch_coverage=1 00:23:15.384 --rc genhtml_function_coverage=1 00:23:15.384 --rc genhtml_legend=1 00:23:15.384 --rc geninfo_all_blocks=1 00:23:15.384 --rc geninfo_unexecuted_blocks=1 00:23:15.384 00:23:15.384 ' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.384 09:44:04 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.384 09:44:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:17.913 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:17.913 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:17.913 09:44:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:17.913 Found net devices under 0000:09:00.0: cvl_0_0 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.913 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:17.914 Found net devices under 0000:09:00.1: cvl_0_1 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:17.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:17.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:23:17.914 00:23:17.914 --- 10.0.0.2 ping statistics --- 00:23:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.914 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:23:17.914 00:23:17.914 --- 10.0.0.1 ping statistics --- 00:23:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.914 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=274565 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 274565 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 274565 ']' 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.914 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:17.914 [2024-10-07 09:44:06.664262] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:23:17.914 [2024-10-07 09:44:06.664351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.914 [2024-10-07 09:44:06.722949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.914 [2024-10-07 09:44:06.825706] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:17.914 [2024-10-07 09:44:06.825797] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.914 [2024-10-07 09:44:06.825826] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.914 [2024-10-07 09:44:06.825837] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.914 [2024-10-07 09:44:06.825847] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.914 [2024-10-07 09:44:06.827475] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.914 [2024-10-07 09:44:06.827584] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.914 [2024-10-07 09:44:06.827689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.914 [2024-10-07 09:44:06.827693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 [2024-10-07 09:44:06.975487] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.173 09:44:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 Malloc0 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 [2024-10-07 09:44:07.025907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 [ 00:23:18.173 { 00:23:18.173 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:18.173 "subtype": "Discovery", 00:23:18.173 "listen_addresses": [], 00:23:18.173 "allow_any_host": true, 00:23:18.173 "hosts": [] 00:23:18.173 }, 00:23:18.173 { 00:23:18.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.173 "subtype": "NVMe", 00:23:18.173 "listen_addresses": [ 00:23:18.173 { 00:23:18.173 "trtype": "TCP", 00:23:18.173 "adrfam": "IPv4", 00:23:18.173 "traddr": "10.0.0.2", 00:23:18.173 "trsvcid": "4420" 00:23:18.173 } 00:23:18.173 ], 00:23:18.173 "allow_any_host": true, 00:23:18.173 "hosts": [], 00:23:18.173 "serial_number": "SPDK00000000000001", 00:23:18.173 "model_number": "SPDK bdev Controller", 00:23:18.173 "max_namespaces": 2, 00:23:18.173 "min_cntlid": 1, 00:23:18.173 "max_cntlid": 65519, 00:23:18.173 "namespaces": [ 00:23:18.173 { 00:23:18.173 "nsid": 1, 00:23:18.173 "bdev_name": "Malloc0", 00:23:18.173 "name": "Malloc0", 00:23:18.173 "nguid": "A895C2DA627D4D088278F8EF4E47F20E", 00:23:18.173 "uuid": "a895c2da-627d-4d08-8278-f8ef4e47f20e" 00:23:18.173 } 00:23:18.173 ] 00:23:18.173 } 00:23:18.173 ] 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=274632 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:18.173 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.433 Malloc1 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.433 [ 00:23:18.433 { 00:23:18.433 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:18.433 "subtype": "Discovery", 00:23:18.433 "listen_addresses": [], 00:23:18.433 "allow_any_host": true, 00:23:18.433 "hosts": [] 00:23:18.433 }, 00:23:18.433 { 00:23:18.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.433 "subtype": "NVMe", 00:23:18.433 "listen_addresses": [ 00:23:18.433 { 00:23:18.433 "trtype": "TCP", 00:23:18.433 "adrfam": "IPv4", 00:23:18.433 "traddr": "10.0.0.2", 00:23:18.433 "trsvcid": "4420" 00:23:18.433 } 00:23:18.433 ], 00:23:18.433 "allow_any_host": true, 00:23:18.433 "hosts": [], 00:23:18.433 "serial_number": "SPDK00000000000001", 00:23:18.433 "model_number": 
"SPDK bdev Controller", 00:23:18.433 "max_namespaces": 2, 00:23:18.433 "min_cntlid": 1, 00:23:18.433 "max_cntlid": 65519, 00:23:18.433 "namespaces": [ 00:23:18.433 { 00:23:18.433 "nsid": 1, 00:23:18.433 "bdev_name": "Malloc0", 00:23:18.433 "name": "Malloc0", 00:23:18.433 "nguid": "A895C2DA627D4D088278F8EF4E47F20E", 00:23:18.433 "uuid": "a895c2da-627d-4d08-8278-f8ef4e47f20e" 00:23:18.433 }, 00:23:18.433 { 00:23:18.433 "nsid": 2, 00:23:18.433 "bdev_name": "Malloc1", 00:23:18.433 "name": "Malloc1", 00:23:18.433 "nguid": "3951221927A846FEB9DBF386D105F24C", 00:23:18.433 "uuid": "39512219-27a8-46fe-b9db-f386d105f24c" 00:23:18.433 } 00:23:18.433 ] 00:23:18.433 } 00:23:18.433 ] 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 274632 00:23:18.433 Asynchronous Event Request test 00:23:18.433 Attaching to 10.0.0.2 00:23:18.433 Attached to 10.0.0.2 00:23:18.433 Registering asynchronous event callbacks... 00:23:18.433 Starting namespace attribute notice tests for all controllers... 00:23:18.433 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:18.433 aer_cb - Changed Namespace 00:23:18.433 Cleaning up... 
00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.433 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.433 rmmod nvme_tcp 
00:23:18.433 rmmod nvme_fabrics 00:23:18.433 rmmod nvme_keyring 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 274565 ']' 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 274565 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 274565 ']' 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 274565 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 274565 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 274565' 00:23:18.693 killing process with pid 274565 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 274565 00:23:18.693 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 274565 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@297 -- # iptr 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.953 09:44:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.854 09:44:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:20.854 00:23:20.854 real 0m5.634s 00:23:20.854 user 0m4.439s 00:23:20.854 sys 0m1.965s 00:23:20.854 09:44:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:20.854 09:44:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.854 ************************************ 00:23:20.854 END TEST nvmf_aer 00:23:20.854 ************************************ 00:23:20.854 09:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:20.854 09:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:20.854 09:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:20.854 09:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.113 ************************************ 00:23:21.113 START TEST nvmf_async_init 00:23:21.113 
************************************ 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:21.113 * Looking for test storage... 00:23:21.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@344 -- # case "$op" in 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.113 09:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:21.113 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:21.113 --rc genhtml_branch_coverage=1 00:23:21.113 --rc genhtml_function_coverage=1 00:23:21.113 --rc genhtml_legend=1 00:23:21.113 --rc geninfo_all_blocks=1 00:23:21.113 --rc geninfo_unexecuted_blocks=1 00:23:21.113 00:23:21.113 ' 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:21.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.113 --rc genhtml_branch_coverage=1 00:23:21.113 --rc genhtml_function_coverage=1 00:23:21.113 --rc genhtml_legend=1 00:23:21.113 --rc geninfo_all_blocks=1 00:23:21.113 --rc geninfo_unexecuted_blocks=1 00:23:21.113 00:23:21.113 ' 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:21.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.113 --rc genhtml_branch_coverage=1 00:23:21.113 --rc genhtml_function_coverage=1 00:23:21.113 --rc genhtml_legend=1 00:23:21.113 --rc geninfo_all_blocks=1 00:23:21.113 --rc geninfo_unexecuted_blocks=1 00:23:21.113 00:23:21.113 ' 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:21.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.113 --rc genhtml_branch_coverage=1 00:23:21.113 --rc genhtml_function_coverage=1 00:23:21.113 --rc genhtml_legend=1 00:23:21.113 --rc geninfo_all_blocks=1 00:23:21.113 --rc geninfo_unexecuted_blocks=1 00:23:21.113 00:23:21.113 ' 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.113 09:44:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.113 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.114 
09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=10be64888f5e41819427391682d83e45 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.114 09:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.019 09:44:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:23.019 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:23.019 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:23.019 Found net devices under 0000:09:00.0: cvl_0_0 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:23:23.019 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:23.277 Found net devices under 0000:09:00.1: cvl_0_1 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:23.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:23:23.277 00:23:23.277 --- 10.0.0.2 ping statistics --- 00:23:23.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.277 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:23:23.277 00:23:23.277 --- 10.0.0.1 ping statistics --- 00:23:23.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.277 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=276504 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 276504 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 276504 ']' 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.277 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.277 [2024-10-07 09:44:12.238700] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:23:23.277 [2024-10-07 09:44:12.238779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.535 [2024-10-07 09:44:12.302692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.535 [2024-10-07 09:44:12.410145] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.535 [2024-10-07 09:44:12.410194] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.535 [2024-10-07 09:44:12.410221] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.535 [2024-10-07 09:44:12.410231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.535 [2024-10-07 09:44:12.410240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.535 [2024-10-07 09:44:12.410754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.535 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.535 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:23.535 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:23.535 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.535 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.793 [2024-10-07 09:44:12.552062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.793 null0 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 10be64888f5e41819427391682d83e45 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.793 [2024-10-07 09:44:12.592319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.793 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.053 nvme0n1 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.053 [ 00:23:24.053 { 00:23:24.053 "name": "nvme0n1", 00:23:24.053 "aliases": [ 00:23:24.053 "10be6488-8f5e-4181-9427-391682d83e45" 00:23:24.053 ], 00:23:24.053 "product_name": "NVMe disk", 00:23:24.053 "block_size": 512, 00:23:24.053 "num_blocks": 2097152, 00:23:24.053 "uuid": "10be6488-8f5e-4181-9427-391682d83e45", 00:23:24.053 "numa_id": 0, 00:23:24.053 "assigned_rate_limits": { 00:23:24.053 "rw_ios_per_sec": 0, 00:23:24.053 "rw_mbytes_per_sec": 0, 00:23:24.053 "r_mbytes_per_sec": 0, 00:23:24.053 "w_mbytes_per_sec": 0 00:23:24.053 }, 00:23:24.053 "claimed": false, 00:23:24.053 "zoned": false, 00:23:24.053 "supported_io_types": { 00:23:24.053 "read": true, 00:23:24.053 "write": true, 00:23:24.053 "unmap": false, 00:23:24.053 "flush": true, 00:23:24.053 "reset": true, 00:23:24.053 "nvme_admin": true, 00:23:24.053 "nvme_io": true, 00:23:24.053 "nvme_io_md": false, 00:23:24.053 "write_zeroes": true, 00:23:24.053 "zcopy": false, 00:23:24.053 "get_zone_info": false, 00:23:24.053 "zone_management": false, 00:23:24.053 "zone_append": false, 00:23:24.053 "compare": true, 00:23:24.053 "compare_and_write": true, 00:23:24.053 "abort": true, 00:23:24.053 "seek_hole": false, 00:23:24.053 "seek_data": false, 00:23:24.053 "copy": true, 00:23:24.053 
"nvme_iov_md": false 00:23:24.053 }, 00:23:24.053 "memory_domains": [ 00:23:24.053 { 00:23:24.053 "dma_device_id": "system", 00:23:24.053 "dma_device_type": 1 00:23:24.053 } 00:23:24.053 ], 00:23:24.053 "driver_specific": { 00:23:24.053 "nvme": [ 00:23:24.053 { 00:23:24.053 "trid": { 00:23:24.053 "trtype": "TCP", 00:23:24.053 "adrfam": "IPv4", 00:23:24.053 "traddr": "10.0.0.2", 00:23:24.053 "trsvcid": "4420", 00:23:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:24.053 }, 00:23:24.053 "ctrlr_data": { 00:23:24.053 "cntlid": 1, 00:23:24.053 "vendor_id": "0x8086", 00:23:24.053 "model_number": "SPDK bdev Controller", 00:23:24.053 "serial_number": "00000000000000000000", 00:23:24.053 "firmware_revision": "25.01", 00:23:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.053 "oacs": { 00:23:24.053 "security": 0, 00:23:24.053 "format": 0, 00:23:24.053 "firmware": 0, 00:23:24.053 "ns_manage": 0 00:23:24.053 }, 00:23:24.053 "multi_ctrlr": true, 00:23:24.053 "ana_reporting": false 00:23:24.053 }, 00:23:24.053 "vs": { 00:23:24.053 "nvme_version": "1.3" 00:23:24.053 }, 00:23:24.053 "ns_data": { 00:23:24.053 "id": 1, 00:23:24.053 "can_share": true 00:23:24.053 } 00:23:24.053 } 00:23:24.053 ], 00:23:24.053 "mp_policy": "active_passive" 00:23:24.053 } 00:23:24.053 } 00:23:24.053 ] 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.053 [2024-10-07 09:44:12.841259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.053 [2024-10-07 09:44:12.841345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0xe65860 (9): Bad file descriptor 00:23:24.053 [2024-10-07 09:44:12.973786] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.053 [ 00:23:24.053 { 00:23:24.053 "name": "nvme0n1", 00:23:24.053 "aliases": [ 00:23:24.053 "10be6488-8f5e-4181-9427-391682d83e45" 00:23:24.053 ], 00:23:24.053 "product_name": "NVMe disk", 00:23:24.053 "block_size": 512, 00:23:24.053 "num_blocks": 2097152, 00:23:24.053 "uuid": "10be6488-8f5e-4181-9427-391682d83e45", 00:23:24.053 "numa_id": 0, 00:23:24.053 "assigned_rate_limits": { 00:23:24.053 "rw_ios_per_sec": 0, 00:23:24.053 "rw_mbytes_per_sec": 0, 00:23:24.053 "r_mbytes_per_sec": 0, 00:23:24.053 "w_mbytes_per_sec": 0 00:23:24.053 }, 00:23:24.053 "claimed": false, 00:23:24.053 "zoned": false, 00:23:24.053 "supported_io_types": { 00:23:24.053 "read": true, 00:23:24.053 "write": true, 00:23:24.053 "unmap": false, 00:23:24.053 "flush": true, 00:23:24.053 "reset": true, 00:23:24.053 "nvme_admin": true, 00:23:24.053 "nvme_io": true, 00:23:24.053 "nvme_io_md": false, 00:23:24.053 "write_zeroes": true, 00:23:24.053 "zcopy": false, 00:23:24.053 "get_zone_info": false, 00:23:24.053 "zone_management": false, 00:23:24.053 "zone_append": false, 00:23:24.053 "compare": true, 00:23:24.053 "compare_and_write": true, 00:23:24.053 "abort": true, 00:23:24.053 "seek_hole": false, 00:23:24.053 "seek_data": false, 00:23:24.053 "copy": true, 00:23:24.053 "nvme_iov_md": false 00:23:24.053 }, 00:23:24.053 "memory_domains": [ 00:23:24.053 { 00:23:24.053 
"dma_device_id": "system", 00:23:24.053 "dma_device_type": 1 00:23:24.053 } 00:23:24.053 ], 00:23:24.053 "driver_specific": { 00:23:24.053 "nvme": [ 00:23:24.053 { 00:23:24.053 "trid": { 00:23:24.053 "trtype": "TCP", 00:23:24.053 "adrfam": "IPv4", 00:23:24.053 "traddr": "10.0.0.2", 00:23:24.053 "trsvcid": "4420", 00:23:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:24.053 }, 00:23:24.053 "ctrlr_data": { 00:23:24.053 "cntlid": 2, 00:23:24.053 "vendor_id": "0x8086", 00:23:24.053 "model_number": "SPDK bdev Controller", 00:23:24.053 "serial_number": "00000000000000000000", 00:23:24.053 "firmware_revision": "25.01", 00:23:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.053 "oacs": { 00:23:24.053 "security": 0, 00:23:24.053 "format": 0, 00:23:24.053 "firmware": 0, 00:23:24.053 "ns_manage": 0 00:23:24.053 }, 00:23:24.053 "multi_ctrlr": true, 00:23:24.053 "ana_reporting": false 00:23:24.053 }, 00:23:24.053 "vs": { 00:23:24.053 "nvme_version": "1.3" 00:23:24.053 }, 00:23:24.053 "ns_data": { 00:23:24.053 "id": 1, 00:23:24.053 "can_share": true 00:23:24.053 } 00:23:24.053 } 00:23:24.053 ], 00:23:24.053 "mp_policy": "active_passive" 00:23:24.053 } 00:23:24.053 } 00:23:24.053 ] 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.053 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.054 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.054 09:44:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.f2zokFaAZU 00:23:24.054 09:44:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.f2zokFaAZU 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.f2zokFaAZU 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.054 [2024-10-07 09:44:13.029898] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.054 [2024-10-07 09:44:13.030017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.054 09:44:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.054 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.054 [2024-10-07 09:44:13.045939] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.314 nvme0n1 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.314 [ 00:23:24.314 { 00:23:24.314 "name": "nvme0n1", 00:23:24.314 "aliases": [ 00:23:24.314 "10be6488-8f5e-4181-9427-391682d83e45" 00:23:24.314 ], 00:23:24.314 "product_name": "NVMe disk", 00:23:24.314 "block_size": 512, 00:23:24.314 "num_blocks": 2097152, 00:23:24.314 "uuid": "10be6488-8f5e-4181-9427-391682d83e45", 00:23:24.314 "numa_id": 0, 00:23:24.314 "assigned_rate_limits": { 00:23:24.314 "rw_ios_per_sec": 0, 00:23:24.314 "rw_mbytes_per_sec": 0, 
00:23:24.314 "r_mbytes_per_sec": 0, 00:23:24.314 "w_mbytes_per_sec": 0 00:23:24.314 }, 00:23:24.314 "claimed": false, 00:23:24.314 "zoned": false, 00:23:24.314 "supported_io_types": { 00:23:24.314 "read": true, 00:23:24.314 "write": true, 00:23:24.314 "unmap": false, 00:23:24.314 "flush": true, 00:23:24.314 "reset": true, 00:23:24.314 "nvme_admin": true, 00:23:24.314 "nvme_io": true, 00:23:24.314 "nvme_io_md": false, 00:23:24.314 "write_zeroes": true, 00:23:24.314 "zcopy": false, 00:23:24.314 "get_zone_info": false, 00:23:24.314 "zone_management": false, 00:23:24.314 "zone_append": false, 00:23:24.314 "compare": true, 00:23:24.314 "compare_and_write": true, 00:23:24.314 "abort": true, 00:23:24.314 "seek_hole": false, 00:23:24.314 "seek_data": false, 00:23:24.314 "copy": true, 00:23:24.314 "nvme_iov_md": false 00:23:24.314 }, 00:23:24.314 "memory_domains": [ 00:23:24.314 { 00:23:24.314 "dma_device_id": "system", 00:23:24.314 "dma_device_type": 1 00:23:24.314 } 00:23:24.314 ], 00:23:24.314 "driver_specific": { 00:23:24.314 "nvme": [ 00:23:24.314 { 00:23:24.314 "trid": { 00:23:24.314 "trtype": "TCP", 00:23:24.314 "adrfam": "IPv4", 00:23:24.314 "traddr": "10.0.0.2", 00:23:24.314 "trsvcid": "4421", 00:23:24.314 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:24.314 }, 00:23:24.314 "ctrlr_data": { 00:23:24.314 "cntlid": 3, 00:23:24.314 "vendor_id": "0x8086", 00:23:24.314 "model_number": "SPDK bdev Controller", 00:23:24.314 "serial_number": "00000000000000000000", 00:23:24.314 "firmware_revision": "25.01", 00:23:24.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.314 "oacs": { 00:23:24.314 "security": 0, 00:23:24.314 "format": 0, 00:23:24.314 "firmware": 0, 00:23:24.314 "ns_manage": 0 00:23:24.314 }, 00:23:24.314 "multi_ctrlr": true, 00:23:24.314 "ana_reporting": false 00:23:24.314 }, 00:23:24.314 "vs": { 00:23:24.314 "nvme_version": "1.3" 00:23:24.314 }, 00:23:24.314 "ns_data": { 00:23:24.314 "id": 1, 00:23:24.314 "can_share": true 00:23:24.314 } 00:23:24.314 } 
00:23:24.314 ], 00:23:24.314 "mp_policy": "active_passive" 00:23:24.314 } 00:23:24.314 } 00:23:24.314 ] 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.f2zokFaAZU 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.314 rmmod nvme_tcp 00:23:24.314 rmmod nvme_fabrics 00:23:24.314 rmmod nvme_keyring 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:24.314 09:44:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 276504 ']' 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 276504 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 276504 ']' 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 276504 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276504 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 276504' 00:23:24.314 killing process with pid 276504 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 276504 00:23:24.314 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 276504 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:24.573 09:44:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.573 09:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.112 09:44:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:27.112 00:23:27.112 real 0m5.662s 00:23:27.112 user 0m2.196s 00:23:27.112 sys 0m1.882s 00:23:27.112 09:44:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:27.112 09:44:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.112 ************************************ 00:23:27.112 END TEST nvmf_async_init 00:23:27.112 ************************************ 00:23:27.112 09:44:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:27.112 09:44:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:27.112 09:44:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:27.112 09:44:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.112 ************************************ 00:23:27.112 START TEST dma 00:23:27.112 ************************************ 00:23:27.112 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:27.112 * 
Looking for test storage... 00:23:27.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:27.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.113 --rc genhtml_branch_coverage=1 00:23:27.113 --rc genhtml_function_coverage=1 00:23:27.113 --rc genhtml_legend=1 00:23:27.113 --rc geninfo_all_blocks=1 00:23:27.113 --rc geninfo_unexecuted_blocks=1 00:23:27.113 00:23:27.113 ' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:27.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.113 --rc genhtml_branch_coverage=1 00:23:27.113 --rc genhtml_function_coverage=1 
00:23:27.113 --rc genhtml_legend=1 00:23:27.113 --rc geninfo_all_blocks=1 00:23:27.113 --rc geninfo_unexecuted_blocks=1 00:23:27.113 00:23:27.113 ' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:27.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.113 --rc genhtml_branch_coverage=1 00:23:27.113 --rc genhtml_function_coverage=1 00:23:27.113 --rc genhtml_legend=1 00:23:27.113 --rc geninfo_all_blocks=1 00:23:27.113 --rc geninfo_unexecuted_blocks=1 00:23:27.113 00:23:27.113 ' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:27.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.113 --rc genhtml_branch_coverage=1 00:23:27.113 --rc genhtml_function_coverage=1 00:23:27.113 --rc genhtml_legend=1 00:23:27.113 --rc geninfo_all_blocks=1 00:23:27.113 --rc geninfo_unexecuted_blocks=1 00:23:27.113 00:23:27.113 ' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:27.113 
09:44:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.113 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:27.114 00:23:27.114 real 0m0.144s 00:23:27.114 user 0m0.095s 00:23:27.114 sys 0m0.057s 00:23:27.114 09:44:15 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:27.114 ************************************ 00:23:27.114 END TEST dma 00:23:27.114 ************************************ 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.114 ************************************ 00:23:27.114 START TEST nvmf_identify 00:23:27.114 ************************************ 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:27.114 * Looking for test storage... 
00:23:27.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:27.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.114 --rc genhtml_branch_coverage=1 00:23:27.114 --rc genhtml_function_coverage=1 00:23:27.114 --rc genhtml_legend=1 00:23:27.114 --rc geninfo_all_blocks=1 00:23:27.114 --rc geninfo_unexecuted_blocks=1 00:23:27.114 00:23:27.114 ' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:23:27.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.114 --rc genhtml_branch_coverage=1 00:23:27.114 --rc genhtml_function_coverage=1 00:23:27.114 --rc genhtml_legend=1 00:23:27.114 --rc geninfo_all_blocks=1 00:23:27.114 --rc geninfo_unexecuted_blocks=1 00:23:27.114 00:23:27.114 ' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:27.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.114 --rc genhtml_branch_coverage=1 00:23:27.114 --rc genhtml_function_coverage=1 00:23:27.114 --rc genhtml_legend=1 00:23:27.114 --rc geninfo_all_blocks=1 00:23:27.114 --rc geninfo_unexecuted_blocks=1 00:23:27.114 00:23:27.114 ' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:27.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.114 --rc genhtml_branch_coverage=1 00:23:27.114 --rc genhtml_function_coverage=1 00:23:27.114 --rc genhtml_legend=1 00:23:27.114 --rc geninfo_all_blocks=1 00:23:27.114 --rc geninfo_unexecuted_blocks=1 00:23:27.114 00:23:27.114 ' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.114 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:27.115 09:44:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:29.023 09:44:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:29.023 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.023 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.024 
09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:29.024 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:29.024 Found net devices under 0000:09:00.0: cvl_0_0 00:23:29.024 09:44:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:29.024 Found net devices under 0000:09:00.1: cvl_0_1 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:29.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:23:29.024 00:23:29.024 --- 10.0.0.2 ping statistics --- 00:23:29.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.024 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:23:29.024 00:23:29.024 --- 10.0.0.1 ping statistics --- 00:23:29.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.024 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=278551 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 278551 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 278551 ']' 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.024 09:44:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.024 [2024-10-07 09:44:17.922978] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:23:29.024 [2024-10-07 09:44:17.923052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.024 [2024-10-07 09:44:17.985893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.283 [2024-10-07 09:44:18.091043] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.283 [2024-10-07 09:44:18.091100] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.283 [2024-10-07 09:44:18.091128] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.283 [2024-10-07 09:44:18.091138] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.283 [2024-10-07 09:44:18.091148] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
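With the target up, the configuration steps the test drives through `rpc_cmd` in the log that follows (transport creation, malloc bdev, subsystem, namespace, listeners) can be reproduced by hand with SPDK's standalone RPC client. A sketch only: the `scripts/rpc.py` path and the default `/var/tmp/spdk.sock` socket are assumptions; the RPC method names and arguments mirror this log.

```shell
# Hedged sketch of the RPC sequence from this test, replayed via SPDK's
# rpc.py client. Assumes an nvmf_tgt is already listening on the default
# RPC socket; all method names/arguments are taken from the log itself.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192 B in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                         # dump the resulting config as JSON
```

The `nvmf_get_subsystems` output should match the JSON dump captured later in this log (discovery subsystem plus `cnode1` with namespace 1 backed by `Malloc0`).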
00:23:29.283 [2024-10-07 09:44:18.092756] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.283 [2024-10-07 09:44:18.092809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.283 [2024-10-07 09:44:18.092783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.283 [2024-10-07 09:44:18.092813] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.283 [2024-10-07 09:44:18.223223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.283 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.541 Malloc0 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.541 09:44:18 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.541 [2024-10-07 09:44:18.300578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.541 09:44:18 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.541 [ 00:23:29.541 { 00:23:29.541 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:29.541 "subtype": "Discovery", 00:23:29.541 "listen_addresses": [ 00:23:29.541 { 00:23:29.541 "trtype": "TCP", 00:23:29.541 "adrfam": "IPv4", 00:23:29.541 "traddr": "10.0.0.2", 00:23:29.541 "trsvcid": "4420" 00:23:29.541 } 00:23:29.541 ], 00:23:29.541 "allow_any_host": true, 00:23:29.541 "hosts": [] 00:23:29.541 }, 00:23:29.541 { 00:23:29.541 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.541 "subtype": "NVMe", 00:23:29.541 "listen_addresses": [ 00:23:29.541 { 00:23:29.541 "trtype": "TCP", 00:23:29.541 "adrfam": "IPv4", 00:23:29.541 "traddr": "10.0.0.2", 00:23:29.541 "trsvcid": "4420" 00:23:29.541 } 00:23:29.541 ], 00:23:29.541 "allow_any_host": true, 00:23:29.541 "hosts": [], 00:23:29.541 "serial_number": "SPDK00000000000001", 00:23:29.541 "model_number": "SPDK bdev Controller", 00:23:29.541 "max_namespaces": 32, 00:23:29.541 "min_cntlid": 1, 00:23:29.541 "max_cntlid": 65519, 00:23:29.541 "namespaces": [ 00:23:29.541 { 00:23:29.541 "nsid": 1, 00:23:29.541 "bdev_name": "Malloc0", 00:23:29.541 "name": "Malloc0", 00:23:29.541 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:29.541 "eui64": "ABCDEF0123456789", 00:23:29.541 "uuid": "c1b18451-bdd0-4cc3-9629-b68064ec0f45" 00:23:29.541 } 00:23:29.541 ] 00:23:29.541 } 00:23:29.541 ] 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.541 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:29.541 [2024-10-07 09:44:18.343992] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:23:29.541 [2024-10-07 09:44:18.344035] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278680 ] 00:23:29.541 [2024-10-07 09:44:18.378019] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:29.541 [2024-10-07 09:44:18.378082] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:29.541 [2024-10-07 09:44:18.378093] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:29.541 [2024-10-07 09:44:18.378109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:29.541 [2024-10-07 09:44:18.378123] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:29.541 [2024-10-07 09:44:18.382082] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:29.541 [2024-10-07 09:44:18.382145] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16ce760 0 00:23:29.541 [2024-10-07 09:44:18.389697] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:29.541 [2024-10-07 09:44:18.389719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:29.541 [2024-10-07 09:44:18.389728] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:29.542 [2024-10-07 09:44:18.389734] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:29.542 [2024-10-07 09:44:18.389771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.389786] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.389793] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.389810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:29.542 [2024-10-07 09:44:18.389837] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 09:44:18.397700] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.397719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.542 [2024-10-07 09:44:18.397727] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.397734] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.397756] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:29.542 [2024-10-07 09:44:18.397769] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:29.542 [2024-10-07 09:44:18.397778] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:29.542 [2024-10-07 09:44:18.397799] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.397811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.397818] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 
00:23:29.542 [2024-10-07 09:44:18.397829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.542 [2024-10-07 09:44:18.397853] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 09:44:18.397996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.398011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.542 [2024-10-07 09:44:18.398018] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.398035] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:29.542 [2024-10-07 09:44:18.398051] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:29.542 [2024-10-07 09:44:18.398064] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398078] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.398091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.542 [2024-10-07 09:44:18.398115] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 09:44:18.398197] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.398217] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:29.542 [2024-10-07 09:44:18.398225] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398232] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.398241] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:29.542 [2024-10-07 09:44:18.398256] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:29.542 [2024-10-07 09:44:18.398271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398279] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398285] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.398295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.542 [2024-10-07 09:44:18.398317] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 09:44:18.398395] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.398410] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.542 [2024-10-07 09:44:18.398417] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398424] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.398433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:29.542 [2024-10-07 09:44:18.398453] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398463] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.398479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.542 [2024-10-07 09:44:18.398504] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 09:44:18.398579] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.398594] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.542 [2024-10-07 09:44:18.398601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398607] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.398615] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:29.542 [2024-10-07 09:44:18.398627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:29.542 [2024-10-07 09:44:18.398641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:29.542 [2024-10-07 09:44:18.398753] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:29.542 [2024-10-07 09:44:18.398765] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:23:29.542 [2024-10-07 09:44:18.398794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398802] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398808] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.398818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.542 [2024-10-07 09:44:18.398844] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 09:44:18.398964] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.398979] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.542 [2024-10-07 09:44:18.398986] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.398993] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.399001] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:29.542 [2024-10-07 09:44:18.399019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.399030] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.399036] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.399047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.542 [2024-10-07 09:44:18.399069] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 
09:44:18.399149] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.399164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.542 [2024-10-07 09:44:18.399171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.399178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.399185] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:29.542 [2024-10-07 09:44:18.399193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:29.542 [2024-10-07 09:44:18.399207] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:29.542 [2024-10-07 09:44:18.399232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:29.542 [2024-10-07 09:44:18.399249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.399257] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.399268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.542 [2024-10-07 09:44:18.399289] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 09:44:18.399403] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.542 [2024-10-07 09:44:18.399419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:29.542 [2024-10-07 09:44:18.399426] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.399432] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ce760): datao=0, datal=4096, cccid=0 00:23:29.542 [2024-10-07 09:44:18.399441] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172e480) on tqpair(0x16ce760): expected_datao=0, payload_size=4096 00:23:29.542 [2024-10-07 09:44:18.399453] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.399475] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.399485] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.439824] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.439845] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.542 [2024-10-07 09:44:18.439858] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.439870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.439883] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:29.542 [2024-10-07 09:44:18.439893] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:29.542 [2024-10-07 09:44:18.439900] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:29.542 [2024-10-07 09:44:18.439908] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:29.542 [2024-10-07 09:44:18.439915] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:23:29.542 [2024-10-07 09:44:18.439923] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:29.542 [2024-10-07 09:44:18.439939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:29.542 [2024-10-07 09:44:18.439954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.439962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.439968] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.439980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.542 [2024-10-07 09:44:18.440004] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.542 [2024-10-07 09:44:18.440087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.542 [2024-10-07 09:44:18.440103] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.542 [2024-10-07 09:44:18.440110] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440117] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.542 [2024-10-07 09:44:18.440130] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440137] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440143] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.440153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.542 [2024-10-07 09:44:18.440163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440170] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440176] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.440185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.542 [2024-10-07 09:44:18.440194] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440201] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440207] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.440215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.542 [2024-10-07 09:44:18.440225] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440231] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440237] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.440250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.542 [2024-10-07 09:44:18.440260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:29.542 [2024-10-07 09:44:18.440295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:23:29.542 [2024-10-07 09:44:18.440309] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.542 [2024-10-07 09:44:18.440316] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ce760) 00:23:29.542 [2024-10-07 09:44:18.440326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.543 [2024-10-07 09:44:18.440348] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e480, cid 0, qid 0 00:23:29.543 [2024-10-07 09:44:18.440374] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e600, cid 1, qid 0 00:23:29.543 [2024-10-07 09:44:18.440382] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e780, cid 2, qid 0 00:23:29.543 [2024-10-07 09:44:18.440389] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.543 [2024-10-07 09:44:18.440397] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ea80, cid 4, qid 0 00:23:29.543 [2024-10-07 09:44:18.440533] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.543 [2024-10-07 09:44:18.440548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.543 [2024-10-07 09:44:18.440555] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.440562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ea80) on tqpair=0x16ce760 00:23:29.543 [2024-10-07 09:44:18.440572] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:29.543 [2024-10-07 09:44:18.440584] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:29.543 [2024-10-07 09:44:18.440603] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.440612] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ce760) 00:23:29.543 [2024-10-07 09:44:18.440624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.543 [2024-10-07 09:44:18.440648] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ea80, cid 4, qid 0 00:23:29.543 [2024-10-07 09:44:18.440785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.543 [2024-10-07 09:44:18.440801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.543 [2024-10-07 09:44:18.440809] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.440815] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ce760): datao=0, datal=4096, cccid=4 00:23:29.543 [2024-10-07 09:44:18.440827] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172ea80) on tqpair(0x16ce760): expected_datao=0, payload_size=4096 00:23:29.543 [2024-10-07 09:44:18.440842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.440855] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.440862] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.440874] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.543 [2024-10-07 09:44:18.440884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.543 [2024-10-07 09:44:18.440891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.440898] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ea80) on tqpair=0x16ce760 00:23:29.543 [2024-10-07 09:44:18.440918] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:29.543 [2024-10-07 09:44:18.440966] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.440978] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ce760) 00:23:29.543 [2024-10-07 09:44:18.440989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.543 [2024-10-07 09:44:18.441001] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.441008] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.441014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16ce760) 00:23:29.543 [2024-10-07 09:44:18.441023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.543 [2024-10-07 09:44:18.441045] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ea80, cid 4, qid 0 00:23:29.543 [2024-10-07 09:44:18.441071] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ec00, cid 5, qid 0 00:23:29.543 [2024-10-07 09:44:18.441285] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.543 [2024-10-07 09:44:18.441300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.543 [2024-10-07 09:44:18.441307] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.441313] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ce760): datao=0, datal=1024, cccid=4 00:23:29.543 [2024-10-07 09:44:18.441321] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172ea80) on tqpair(0x16ce760): expected_datao=0, 
payload_size=1024 00:23:29.543 [2024-10-07 09:44:18.441328] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.441337] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.441345] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.441369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.543 [2024-10-07 09:44:18.441377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.543 [2024-10-07 09:44:18.441384] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.441390] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ec00) on tqpair=0x16ce760 00:23:29.543 [2024-10-07 09:44:18.484697] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.543 [2024-10-07 09:44:18.484716] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.543 [2024-10-07 09:44:18.484724] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.484731] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ea80) on tqpair=0x16ce760 00:23:29.543 [2024-10-07 09:44:18.484755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.484765] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ce760) 00:23:29.543 [2024-10-07 09:44:18.484777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.543 [2024-10-07 09:44:18.484809] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ea80, cid 4, qid 0 00:23:29.543 [2024-10-07 09:44:18.484944] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.543 [2024-10-07 09:44:18.484960] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.543 [2024-10-07 09:44:18.484968] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.484975] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ce760): datao=0, datal=3072, cccid=4 00:23:29.543 [2024-10-07 09:44:18.484983] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172ea80) on tqpair(0x16ce760): expected_datao=0, payload_size=3072 00:23:29.543 [2024-10-07 09:44:18.484997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.485026] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.485036] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.525808] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.543 [2024-10-07 09:44:18.525828] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.543 [2024-10-07 09:44:18.525836] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.525844] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ea80) on tqpair=0x16ce760 00:23:29.543 [2024-10-07 09:44:18.525860] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.543 [2024-10-07 09:44:18.525869] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16ce760) 00:23:29.543 [2024-10-07 09:44:18.525881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.543 [2024-10-07 09:44:18.525912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ea80, cid 4, qid 0 00:23:29.543 [2024-10-07 09:44:18.526010] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.543 [2024-10-07 
09:44:18.526025] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:29.543 [2024-10-07 09:44:18.526032] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:29.543 [2024-10-07 09:44:18.526038] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16ce760): datao=0, datal=8, cccid=4
00:23:29.543 [2024-10-07 09:44:18.526046] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172ea80) on tqpair(0x16ce760): expected_datao=0, payload_size=8
00:23:29.543 [2024-10-07 09:44:18.526053] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:29.543 [2024-10-07 09:44:18.526063] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:29.543 [2024-10-07 09:44:18.526070] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:29.803 [2024-10-07 09:44:18.566795] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:29.803 [2024-10-07 09:44:18.566815] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:29.803 [2024-10-07 09:44:18.566824] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:29.804 [2024-10-07 09:44:18.566834] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ea80) on tqpair=0x16ce760
00:23:29.804 =====================================================
00:23:29.804 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:29.804 =====================================================
00:23:29.804 Controller Capabilities/Features
00:23:29.804 ================================
00:23:29.804 Vendor ID: 0000
00:23:29.804 Subsystem Vendor ID: 0000
00:23:29.804 Serial Number: ....................
00:23:29.804 Model Number: ........................................
00:23:29.804 Firmware Version: 25.01
00:23:29.804 Recommended Arb Burst: 0
00:23:29.804 IEEE OUI Identifier: 00 00 00
00:23:29.804 Multi-path I/O
00:23:29.804 May have multiple subsystem ports: No
00:23:29.804 May have multiple controllers: No
00:23:29.804 Associated with SR-IOV VF: No
00:23:29.804 Max Data Transfer Size: 131072
00:23:29.804 Max Number of Namespaces: 0
00:23:29.804 Max Number of I/O Queues: 1024
00:23:29.804 NVMe Specification Version (VS): 1.3
00:23:29.804 NVMe Specification Version (Identify): 1.3
00:23:29.804 Maximum Queue Entries: 128
00:23:29.804 Contiguous Queues Required: Yes
00:23:29.804 Arbitration Mechanisms Supported
00:23:29.804 Weighted Round Robin: Not Supported
00:23:29.804 Vendor Specific: Not Supported
00:23:29.804 Reset Timeout: 15000 ms
00:23:29.804 Doorbell Stride: 4 bytes
00:23:29.804 NVM Subsystem Reset: Not Supported
00:23:29.804 Command Sets Supported
00:23:29.804 NVM Command Set: Supported
00:23:29.804 Boot Partition: Not Supported
00:23:29.804 Memory Page Size Minimum: 4096 bytes
00:23:29.804 Memory Page Size Maximum: 4096 bytes
00:23:29.804 Persistent Memory Region: Not Supported
00:23:29.804 Optional Asynchronous Events Supported
00:23:29.804 Namespace Attribute Notices: Not Supported
00:23:29.804 Firmware Activation Notices: Not Supported
00:23:29.804 ANA Change Notices: Not Supported
00:23:29.804 PLE Aggregate Log Change Notices: Not Supported
00:23:29.804 LBA Status Info Alert Notices: Not Supported
00:23:29.804 EGE Aggregate Log Change Notices: Not Supported
00:23:29.804 Normal NVM Subsystem Shutdown event: Not Supported
00:23:29.804 Zone Descriptor Change Notices: Not Supported
00:23:29.804 Discovery Log Change Notices: Supported
00:23:29.804 Controller Attributes
00:23:29.804 128-bit Host Identifier: Not Supported
00:23:29.804 Non-Operational Permissive Mode: Not Supported
00:23:29.804 NVM Sets: Not Supported
00:23:29.804 Read Recovery Levels: Not Supported
00:23:29.804 Endurance Groups: Not Supported
00:23:29.804 Predictable Latency Mode: Not Supported
00:23:29.804 Traffic Based Keep ALive: Not Supported
00:23:29.804 Namespace Granularity: Not Supported
00:23:29.804 SQ Associations: Not Supported
00:23:29.804 UUID List: Not Supported
00:23:29.804 Multi-Domain Subsystem: Not Supported
00:23:29.804 Fixed Capacity Management: Not Supported
00:23:29.804 Variable Capacity Management: Not Supported
00:23:29.804 Delete Endurance Group: Not Supported
00:23:29.804 Delete NVM Set: Not Supported
00:23:29.804 Extended LBA Formats Supported: Not Supported
00:23:29.804 Flexible Data Placement Supported: Not Supported
00:23:29.804 
00:23:29.804 Controller Memory Buffer Support
00:23:29.804 ================================
00:23:29.804 Supported: No
00:23:29.804 
00:23:29.804 Persistent Memory Region Support
00:23:29.804 ================================
00:23:29.804 Supported: No
00:23:29.804 
00:23:29.804 Admin Command Set Attributes
00:23:29.804 ============================
00:23:29.804 Security Send/Receive: Not Supported
00:23:29.804 Format NVM: Not Supported
00:23:29.804 Firmware Activate/Download: Not Supported
00:23:29.804 Namespace Management: Not Supported
00:23:29.804 Device Self-Test: Not Supported
00:23:29.804 Directives: Not Supported
00:23:29.804 NVMe-MI: Not Supported
00:23:29.804 Virtualization Management: Not Supported
00:23:29.804 Doorbell Buffer Config: Not Supported
00:23:29.804 Get LBA Status Capability: Not Supported
00:23:29.804 Command & Feature Lockdown Capability: Not Supported
00:23:29.804 Abort Command Limit: 1
00:23:29.804 Async Event Request Limit: 4
00:23:29.804 Number of Firmware Slots: N/A
00:23:29.804 Firmware Slot 1 Read-Only: N/A
00:23:29.804 Firmware Activation Without Reset: N/A
00:23:29.804 Multiple Update Detection Support: N/A
00:23:29.804 Firmware Update Granularity: No Information Provided
00:23:29.804 Per-Namespace SMART Log: No
00:23:29.804 Asymmetric Namespace Access Log Page: Not Supported
00:23:29.804 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:29.804 Command Effects Log Page: Not Supported
00:23:29.804 Get Log Page Extended Data: Supported
00:23:29.804 Telemetry Log Pages: Not Supported
00:23:29.804 Persistent Event Log Pages: Not Supported
00:23:29.804 Supported Log Pages Log Page: May Support
00:23:29.804 Commands Supported & Effects Log Page: Not Supported
00:23:29.804 Feature Identifiers & Effects Log Page:May Support
00:23:29.804 NVMe-MI Commands & Effects Log Page: May Support
00:23:29.804 Data Area 4 for Telemetry Log: Not Supported
00:23:29.804 Error Log Page Entries Supported: 128
00:23:29.804 Keep Alive: Not Supported
00:23:29.804 
00:23:29.804 NVM Command Set Attributes
00:23:29.804 ==========================
00:23:29.804 Submission Queue Entry Size
00:23:29.804 Max: 1
00:23:29.804 Min: 1
00:23:29.804 Completion Queue Entry Size
00:23:29.804 Max: 1
00:23:29.804 Min: 1
00:23:29.804 Number of Namespaces: 0
00:23:29.804 Compare Command: Not Supported
00:23:29.804 Write Uncorrectable Command: Not Supported
00:23:29.804 Dataset Management Command: Not Supported
00:23:29.804 Write Zeroes Command: Not Supported
00:23:29.804 Set Features Save Field: Not Supported
00:23:29.804 Reservations: Not Supported
00:23:29.804 Timestamp: Not Supported
00:23:29.804 Copy: Not Supported
00:23:29.804 Volatile Write Cache: Not Present
00:23:29.804 Atomic Write Unit (Normal): 1
00:23:29.804 Atomic Write Unit (PFail): 1
00:23:29.804 Atomic Compare & Write Unit: 1
00:23:29.804 Fused Compare & Write: Supported
00:23:29.804 Scatter-Gather List
00:23:29.804 SGL Command Set: Supported
00:23:29.804 SGL Keyed: Supported
00:23:29.804 SGL Bit Bucket Descriptor: Not Supported
00:23:29.804 SGL Metadata Pointer: Not Supported
00:23:29.804 Oversized SGL: Not Supported
00:23:29.804 SGL Metadata Address: Not Supported
00:23:29.804 SGL Offset: Supported
00:23:29.804 Transport SGL Data Block: Not Supported
00:23:29.804 Replay Protected Memory Block: Not Supported
00:23:29.804 
00:23:29.804 Firmware Slot Information
00:23:29.804 =========================
00:23:29.804 Active slot: 0
00:23:29.804 
00:23:29.804 
00:23:29.804 Error Log
00:23:29.804 =========
00:23:29.804 
00:23:29.804 Active Namespaces
00:23:29.804 =================
00:23:29.804 Discovery Log Page
00:23:29.804 ==================
00:23:29.804 Generation Counter: 2
00:23:29.804 Number of Records: 2
00:23:29.804 Record Format: 0
00:23:29.804 
00:23:29.804 Discovery Log Entry 0
00:23:29.804 ----------------------
00:23:29.804 Transport Type: 3 (TCP)
00:23:29.804 Address Family: 1 (IPv4)
00:23:29.804 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:29.804 Entry Flags:
00:23:29.804 Duplicate Returned Information: 1
00:23:29.804 Explicit Persistent Connection Support for Discovery: 1
00:23:29.804 Transport Requirements:
00:23:29.804 Secure Channel: Not Required
00:23:29.804 Port ID: 0 (0x0000)
00:23:29.804 Controller ID: 65535 (0xffff)
00:23:29.804 Admin Max SQ Size: 128
00:23:29.804 Transport Service Identifier: 4420
00:23:29.804 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:29.804 Transport Address: 10.0.0.2
00:23:29.804 Discovery Log Entry 1
00:23:29.804 ----------------------
00:23:29.804 Transport Type: 3 (TCP)
00:23:29.804 Address Family: 1 (IPv4)
00:23:29.804 Subsystem Type: 2 (NVM Subsystem)
00:23:29.804 Entry Flags:
00:23:29.804 Duplicate Returned Information: 0
00:23:29.804 Explicit Persistent Connection Support for Discovery: 0
00:23:29.804 Transport Requirements:
00:23:29.804 Secure Channel: Not Required
00:23:29.804 Port ID: 0 (0x0000)
00:23:29.804 Controller ID: 65535 (0xffff)
00:23:29.804 Admin Max SQ Size: 128
00:23:29.804 Transport Service Identifier: 4420
00:23:29.804 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:29.804 Transport Address: 10.0.0.2 [2024-10-07 09:44:18.566948] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:29.804 [2024-10-07 09:44:18.566971]
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e480) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.566986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.805 [2024-10-07 09:44:18.566996] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e600) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.567004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.805 [2024-10-07 09:44:18.567012] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e780) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.567020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.805 [2024-10-07 09:44:18.567028] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.567036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.805 [2024-10-07 09:44:18.567049] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567058] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.567089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.567117] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.567233] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.567249] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.567256] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567263] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.567278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567288] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567294] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.567305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.567334] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.567424] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.567439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.567447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567454] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.567462] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:29.805 [2024-10-07 09:44:18.567475] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:29.805 [2024-10-07 09:44:18.567495] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567507] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 
09:44:18.567513] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.567524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.567546] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.567626] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.567641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.567649] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567656] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.567684] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567697] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.567714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.567740] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.567818] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.567833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.567841] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567848] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 
00:23:29.805 [2024-10-07 09:44:18.567867] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567878] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.567885] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.567900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.567923] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.568000] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.568015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.568022] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568029] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.568048] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568059] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568065] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.568076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.568097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.568175] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.568190] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 
[2024-10-07 09:44:18.568197] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568204] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.568222] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568233] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568239] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.568250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.568271] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.568348] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.568363] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.568370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.568396] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568407] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568414] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.568424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.568447] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 
0 00:23:29.805 [2024-10-07 09:44:18.568520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.568535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.568542] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568549] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.568568] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568580] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.568587] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.568603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.568629] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.572699] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.572717] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.572725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.572731] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.572750] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.572761] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.572768] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16ce760) 00:23:29.805 [2024-10-07 09:44:18.572779] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.805 [2024-10-07 09:44:18.572801] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172e900, cid 3, qid 0 00:23:29.805 [2024-10-07 09:44:18.572922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.805 [2024-10-07 09:44:18.572937] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.805 [2024-10-07 09:44:18.572944] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.805 [2024-10-07 09:44:18.572951] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172e900) on tqpair=0x16ce760 00:23:29.805 [2024-10-07 09:44:18.572965] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:29.805 00:23:29.805 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:29.805 [2024-10-07 09:44:18.610540] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:23:29.806 [2024-10-07 09:44:18.610585] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278692 ] 00:23:29.806 [2024-10-07 09:44:18.644369] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:29.806 [2024-10-07 09:44:18.644426] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:29.806 [2024-10-07 09:44:18.644436] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:29.806 [2024-10-07 09:44:18.644451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:29.806 [2024-10-07 09:44:18.644463] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:29.806 [2024-10-07 09:44:18.644911] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:29.806 [2024-10-07 09:44:18.644952] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xce1760 0 00:23:29.806 [2024-10-07 09:44:18.655685] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:29.806 [2024-10-07 09:44:18.655706] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:29.806 [2024-10-07 09:44:18.655715] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:29.806 [2024-10-07 09:44:18.655722] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:29.806 [2024-10-07 09:44:18.655754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.655770] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.655778] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.806 [2024-10-07 09:44:18.655792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:29.806 [2024-10-07 09:44:18.655820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.806 [2024-10-07 09:44:18.663680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.806 [2024-10-07 09:44:18.663699] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.806 [2024-10-07 09:44:18.663712] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.663719] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.806 [2024-10-07 09:44:18.663734] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:29.806 [2024-10-07 09:44:18.663744] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:29.806 [2024-10-07 09:44:18.663753] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:29.806 [2024-10-07 09:44:18.663772] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.663781] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.663787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.806 [2024-10-07 09:44:18.663798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.806 [2024-10-07 09:44:18.663823] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.806 [2024-10-07 09:44:18.663926] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.806 [2024-10-07 09:44:18.663940] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.806 [2024-10-07 09:44:18.663947] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.663954] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.806 [2024-10-07 09:44:18.663962] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:29.806 [2024-10-07 09:44:18.663975] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:29.806 [2024-10-07 09:44:18.663987] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.663994] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664001] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.806 [2024-10-07 09:44:18.664011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.806 [2024-10-07 09:44:18.664032] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.806 [2024-10-07 09:44:18.664106] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.806 [2024-10-07 09:44:18.664118] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.806 [2024-10-07 09:44:18.664125] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.806 [2024-10-07 09:44:18.664140] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
check en (no timeout) 00:23:29.806 [2024-10-07 09:44:18.664153] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:29.806 [2024-10-07 09:44:18.664165] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664172] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664183] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.806 [2024-10-07 09:44:18.664194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.806 [2024-10-07 09:44:18.664215] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.806 [2024-10-07 09:44:18.664284] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.806 [2024-10-07 09:44:18.664296] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.806 [2024-10-07 09:44:18.664303] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.806 [2024-10-07 09:44:18.664319] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:29.806 [2024-10-07 09:44:18.664334] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664343] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664349] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.806 [2024-10-07 09:44:18.664360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:29.806 [2024-10-07 09:44:18.664381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.806 [2024-10-07 09:44:18.664469] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.806 [2024-10-07 09:44:18.664481] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.806 [2024-10-07 09:44:18.664488] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664495] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.806 [2024-10-07 09:44:18.664502] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:29.806 [2024-10-07 09:44:18.664510] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:29.806 [2024-10-07 09:44:18.664523] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:29.806 [2024-10-07 09:44:18.664632] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:29.806 [2024-10-07 09:44:18.664640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:29.806 [2024-10-07 09:44:18.664675] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664685] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664691] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.806 [2024-10-07 09:44:18.664701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:29.806 [2024-10-07 09:44:18.664738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.806 [2024-10-07 09:44:18.664812] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.806 [2024-10-07 09:44:18.664824] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.806 [2024-10-07 09:44:18.664831] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664838] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.806 [2024-10-07 09:44:18.664846] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:29.806 [2024-10-07 09:44:18.664862] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664871] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.664881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.806 [2024-10-07 09:44:18.664892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.806 [2024-10-07 09:44:18.664912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.806 [2024-10-07 09:44:18.664982] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.806 [2024-10-07 09:44:18.664994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.806 [2024-10-07 09:44:18.665001] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.665008] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.806 [2024-10-07 09:44:18.665015] 
nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:29.806 [2024-10-07 09:44:18.665023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:29.806 [2024-10-07 09:44:18.665036] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:29.806 [2024-10-07 09:44:18.665050] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:29.806 [2024-10-07 09:44:18.665065] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.806 [2024-10-07 09:44:18.665073] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.806 [2024-10-07 09:44:18.665083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.806 [2024-10-07 09:44:18.665104] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.806 [2024-10-07 09:44:18.665225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.806 [2024-10-07 09:44:18.665239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.807 [2024-10-07 09:44:18.665246] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.665253] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce1760): datao=0, datal=4096, cccid=0 00:23:29.807 [2024-10-07 09:44:18.665260] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41480) on tqpair(0xce1760): expected_datao=0, payload_size=4096 00:23:29.807 [2024-10-07 09:44:18.665268] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.665285] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.665294] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.706735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.807 [2024-10-07 09:44:18.706768] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.807 [2024-10-07 09:44:18.706775] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.706782] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.807 [2024-10-07 09:44:18.706794] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:29.807 [2024-10-07 09:44:18.706802] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:29.807 [2024-10-07 09:44:18.706810] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:29.807 [2024-10-07 09:44:18.706816] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:29.807 [2024-10-07 09:44:18.706824] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:29.807 [2024-10-07 09:44:18.706832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.706851] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.706865] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.706873] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.706879] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.706890] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.807 [2024-10-07 09:44:18.706914] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41480, cid 0, qid 0 00:23:29.807 [2024-10-07 09:44:18.706998] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.807 [2024-10-07 09:44:18.707014] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.807 [2024-10-07 09:44:18.707021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707028] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.807 [2024-10-07 09:44:18.707038] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707046] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.707062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.807 [2024-10-07 09:44:18.707073] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707079] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707085] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.707094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:29.807 [2024-10-07 09:44:18.707104] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707111] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.707125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.807 [2024-10-07 09:44:18.707135] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707148] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.707157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.807 [2024-10-07 09:44:18.707165] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.707185] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.707213] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707219] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.707229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.807 [2024-10-07 09:44:18.707251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd41480, cid 0, qid 0 00:23:29.807 [2024-10-07 09:44:18.707277] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41600, cid 1, qid 0 00:23:29.807 [2024-10-07 09:44:18.707288] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41780, cid 2, qid 0 00:23:29.807 [2024-10-07 09:44:18.707296] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:29.807 [2024-10-07 09:44:18.707304] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41a80, cid 4, qid 0 00:23:29.807 [2024-10-07 09:44:18.707446] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.807 [2024-10-07 09:44:18.707460] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.807 [2024-10-07 09:44:18.707467] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707474] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41a80) on tqpair=0xce1760 00:23:29.807 [2024-10-07 09:44:18.707481] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:29.807 [2024-10-07 09:44:18.707490] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.707503] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.707519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.707531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707538] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707544] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.707569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.807 [2024-10-07 09:44:18.707591] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41a80, cid 4, qid 0 00:23:29.807 [2024-10-07 09:44:18.707735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.807 [2024-10-07 09:44:18.707750] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.807 [2024-10-07 09:44:18.707757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707764] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41a80) on tqpair=0xce1760 00:23:29.807 [2024-10-07 09:44:18.707832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.707853] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.707867] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.707875] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.707885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.807 [2024-10-07 09:44:18.707908] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41a80, cid 4, qid 0 00:23:29.807 [2024-10-07 09:44:18.708004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.807 [2024-10-07 09:44:18.708021] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.807 [2024-10-07 09:44:18.708029] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.708036] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce1760): datao=0, datal=4096, cccid=4 00:23:29.807 [2024-10-07 09:44:18.708043] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41a80) on tqpair(0xce1760): expected_datao=0, payload_size=4096 00:23:29.807 [2024-10-07 09:44:18.708051] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.708060] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.708072] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.708085] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.807 [2024-10-07 09:44:18.708094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.807 [2024-10-07 09:44:18.708101] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.708108] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41a80) on tqpair=0xce1760 00:23:29.807 [2024-10-07 09:44:18.708123] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:29.807 [2024-10-07 09:44:18.708147] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.708166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:29.807 [2024-10-07 09:44:18.708180] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.708187] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xce1760) 00:23:29.807 [2024-10-07 09:44:18.708198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.807 [2024-10-07 09:44:18.708236] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41a80, cid 4, qid 0 00:23:29.807 [2024-10-07 09:44:18.708355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.807 [2024-10-07 09:44:18.708371] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.807 [2024-10-07 09:44:18.708378] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.708385] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce1760): datao=0, datal=4096, cccid=4 00:23:29.807 [2024-10-07 09:44:18.708392] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41a80) on tqpair(0xce1760): expected_datao=0, payload_size=4096 00:23:29.807 [2024-10-07 09:44:18.708399] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.807 [2024-10-07 09:44:18.708417] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.708426] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.748738] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.808 [2024-10-07 09:44:18.748757] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.808 [2024-10-07 09:44:18.748764] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.748771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41a80) on tqpair=0xce1760 00:23:29.808 [2024-10-07 09:44:18.748791] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:29.808 [2024-10-07 
09:44:18.748810] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:29.808 [2024-10-07 09:44:18.748825] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.748832] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.748843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.808 [2024-10-07 09:44:18.748866] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41a80, cid 4, qid 0 00:23:29.808 [2024-10-07 09:44:18.748974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.808 [2024-10-07 09:44:18.748989] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.808 [2024-10-07 09:44:18.748996] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.749002] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce1760): datao=0, datal=4096, cccid=4 00:23:29.808 [2024-10-07 09:44:18.749014] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41a80) on tqpair(0xce1760): expected_datao=0, payload_size=4096 00:23:29.808 [2024-10-07 09:44:18.749022] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.749039] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.749048] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.794688] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.808 [2024-10-07 09:44:18.794706] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.808 [2024-10-07 09:44:18.794714] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.794721] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41a80) on tqpair=0xce1760 00:23:29.808 [2024-10-07 09:44:18.794735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:29.808 [2024-10-07 09:44:18.794750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:29.808 [2024-10-07 09:44:18.794766] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:29.808 [2024-10-07 09:44:18.794777] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:29.808 [2024-10-07 09:44:18.794786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:29.808 [2024-10-07 09:44:18.794795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:29.808 [2024-10-07 09:44:18.794803] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:29.808 [2024-10-07 09:44:18.794810] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:29.808 [2024-10-07 09:44:18.794819] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:29.808 [2024-10-07 09:44:18.794837] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.794846] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.794857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.808 [2024-10-07 09:44:18.794868] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.794876] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.794882] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.794891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.808 [2024-10-07 09:44:18.794914] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41a80, cid 4, qid 0 00:23:29.808 [2024-10-07 09:44:18.794926] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41c00, cid 5, qid 0 00:23:29.808 [2024-10-07 09:44:18.795017] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.808 [2024-10-07 09:44:18.795031] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.808 [2024-10-07 09:44:18.795038] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795045] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41a80) on tqpair=0xce1760 00:23:29.808 [2024-10-07 09:44:18.795055] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.808 [2024-10-07 09:44:18.795064] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.808 [2024-10-07 09:44:18.795071] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795081] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41c00) on tqpair=0xce1760 00:23:29.808 [2024-10-07 09:44:18.795097] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795107] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.795117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.808 [2024-10-07 09:44:18.795138] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41c00, cid 5, qid 0 00:23:29.808 [2024-10-07 09:44:18.795217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.808 [2024-10-07 09:44:18.795230] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.808 [2024-10-07 09:44:18.795237] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795244] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41c00) on tqpair=0xce1760 00:23:29.808 [2024-10-07 09:44:18.795259] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795268] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.795278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.808 [2024-10-07 09:44:18.795298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41c00, cid 5, qid 0 00:23:29.808 [2024-10-07 09:44:18.795372] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.808 [2024-10-07 09:44:18.795384] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.808 [2024-10-07 09:44:18.795391] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795397] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41c00) on 
tqpair=0xce1760 00:23:29.808 [2024-10-07 09:44:18.795412] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795421] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.795431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.808 [2024-10-07 09:44:18.795451] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41c00, cid 5, qid 0 00:23:29.808 [2024-10-07 09:44:18.795530] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.808 [2024-10-07 09:44:18.795542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.808 [2024-10-07 09:44:18.795549] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795556] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41c00) on tqpair=0xce1760 00:23:29.808 [2024-10-07 09:44:18.795580] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.795601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.808 [2024-10-07 09:44:18.795614] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795621] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.795631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.808 [2024-10-07 
09:44:18.795642] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795650] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.795659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.808 [2024-10-07 09:44:18.795687] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.808 [2024-10-07 09:44:18.795698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xce1760) 00:23:29.808 [2024-10-07 09:44:18.795709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.809 [2024-10-07 09:44:18.795731] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41c00, cid 5, qid 0 00:23:29.809 [2024-10-07 09:44:18.795743] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41a80, cid 4, qid 0 00:23:29.809 [2024-10-07 09:44:18.795750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41d80, cid 6, qid 0 00:23:29.809 [2024-10-07 09:44:18.795758] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f00, cid 7, qid 0 00:23:29.809 [2024-10-07 09:44:18.795920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.809 [2024-10-07 09:44:18.795935] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.809 [2024-10-07 09:44:18.795942] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.795948] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce1760): datao=0, datal=8192, cccid=5 00:23:29.809 [2024-10-07 09:44:18.795956] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xd41c00) on tqpair(0xce1760): expected_datao=0, payload_size=8192 00:23:29.809 [2024-10-07 09:44:18.795963] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.795984] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.795996] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796009] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.809 [2024-10-07 09:44:18.796019] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.809 [2024-10-07 09:44:18.796026] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796032] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce1760): datao=0, datal=512, cccid=4 00:23:29.809 [2024-10-07 09:44:18.796040] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41a80) on tqpair(0xce1760): expected_datao=0, payload_size=512 00:23:29.809 [2024-10-07 09:44:18.796047] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796056] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796063] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.809 [2024-10-07 09:44:18.796080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.809 [2024-10-07 09:44:18.796086] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796092] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce1760): datao=0, datal=512, cccid=6 00:23:29.809 [2024-10-07 09:44:18.796100] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41d80) on tqpair(0xce1760): expected_datao=0, payload_size=512 
00:23:29.809 [2024-10-07 09:44:18.796107] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796116] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796123] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.809 [2024-10-07 09:44:18.796140] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.809 [2024-10-07 09:44:18.796146] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796152] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce1760): datao=0, datal=4096, cccid=7 00:23:29.809 [2024-10-07 09:44:18.796159] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41f00) on tqpair(0xce1760): expected_datao=0, payload_size=4096 00:23:29.809 [2024-10-07 09:44:18.796170] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796180] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796202] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796213] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.809 [2024-10-07 09:44:18.796223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.809 [2024-10-07 09:44:18.796229] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41c00) on tqpair=0xce1760 00:23:29.809 [2024-10-07 09:44:18.796254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.809 [2024-10-07 09:44:18.796280] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.809 [2024-10-07 09:44:18.796286] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796292] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41a80) on tqpair=0xce1760 00:23:29.809 [2024-10-07 09:44:18.796307] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.809 [2024-10-07 09:44:18.796317] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.809 [2024-10-07 09:44:18.796323] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796329] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41d80) on tqpair=0xce1760 00:23:29.809 [2024-10-07 09:44:18.796339] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.809 [2024-10-07 09:44:18.796348] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.809 [2024-10-07 09:44:18.796354] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.809 [2024-10-07 09:44:18.796360] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f00) on tqpair=0xce1760 00:23:29.809 ===================================================== 00:23:29.809 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.809 ===================================================== 00:23:29.809 Controller Capabilities/Features 00:23:29.809 ================================ 00:23:29.809 Vendor ID: 8086 00:23:29.809 Subsystem Vendor ID: 8086 00:23:29.809 Serial Number: SPDK00000000000001 00:23:29.809 Model Number: SPDK bdev Controller 00:23:29.809 Firmware Version: 25.01 00:23:29.809 Recommended Arb Burst: 6 00:23:29.809 IEEE OUI Identifier: e4 d2 5c 00:23:29.809 Multi-path I/O 00:23:29.809 May have multiple subsystem ports: Yes 00:23:29.809 May have multiple controllers: Yes 00:23:29.809 Associated with SR-IOV VF: No 00:23:29.809 Max Data Transfer Size: 131072 00:23:29.809 Max Number of Namespaces: 32 00:23:29.809 Max Number of I/O 
Queues: 127 00:23:29.809 NVMe Specification Version (VS): 1.3 00:23:29.809 NVMe Specification Version (Identify): 1.3 00:23:29.809 Maximum Queue Entries: 128 00:23:29.809 Contiguous Queues Required: Yes 00:23:29.809 Arbitration Mechanisms Supported 00:23:29.809 Weighted Round Robin: Not Supported 00:23:29.809 Vendor Specific: Not Supported 00:23:29.809 Reset Timeout: 15000 ms 00:23:29.809 Doorbell Stride: 4 bytes 00:23:29.809 NVM Subsystem Reset: Not Supported 00:23:29.809 Command Sets Supported 00:23:29.809 NVM Command Set: Supported 00:23:29.809 Boot Partition: Not Supported 00:23:29.809 Memory Page Size Minimum: 4096 bytes 00:23:29.809 Memory Page Size Maximum: 4096 bytes 00:23:29.809 Persistent Memory Region: Not Supported 00:23:29.809 Optional Asynchronous Events Supported 00:23:29.809 Namespace Attribute Notices: Supported 00:23:29.809 Firmware Activation Notices: Not Supported 00:23:29.809 ANA Change Notices: Not Supported 00:23:29.809 PLE Aggregate Log Change Notices: Not Supported 00:23:29.809 LBA Status Info Alert Notices: Not Supported 00:23:29.809 EGE Aggregate Log Change Notices: Not Supported 00:23:29.809 Normal NVM Subsystem Shutdown event: Not Supported 00:23:29.809 Zone Descriptor Change Notices: Not Supported 00:23:29.809 Discovery Log Change Notices: Not Supported 00:23:29.809 Controller Attributes 00:23:29.809 128-bit Host Identifier: Supported 00:23:29.809 Non-Operational Permissive Mode: Not Supported 00:23:29.809 NVM Sets: Not Supported 00:23:29.809 Read Recovery Levels: Not Supported 00:23:29.809 Endurance Groups: Not Supported 00:23:29.809 Predictable Latency Mode: Not Supported 00:23:29.809 Traffic Based Keep ALive: Not Supported 00:23:29.809 Namespace Granularity: Not Supported 00:23:29.809 SQ Associations: Not Supported 00:23:29.809 UUID List: Not Supported 00:23:29.809 Multi-Domain Subsystem: Not Supported 00:23:29.809 Fixed Capacity Management: Not Supported 00:23:29.809 Variable Capacity Management: Not Supported 00:23:29.809 Delete 
Endurance Group: Not Supported 00:23:29.809 Delete NVM Set: Not Supported 00:23:29.809 Extended LBA Formats Supported: Not Supported 00:23:29.809 Flexible Data Placement Supported: Not Supported 00:23:29.809 00:23:29.809 Controller Memory Buffer Support 00:23:29.809 ================================ 00:23:29.809 Supported: No 00:23:29.809 00:23:29.809 Persistent Memory Region Support 00:23:29.809 ================================ 00:23:29.809 Supported: No 00:23:29.809 00:23:29.809 Admin Command Set Attributes 00:23:29.809 ============================ 00:23:29.809 Security Send/Receive: Not Supported 00:23:29.809 Format NVM: Not Supported 00:23:29.809 Firmware Activate/Download: Not Supported 00:23:29.809 Namespace Management: Not Supported 00:23:29.809 Device Self-Test: Not Supported 00:23:29.809 Directives: Not Supported 00:23:29.809 NVMe-MI: Not Supported 00:23:29.809 Virtualization Management: Not Supported 00:23:29.809 Doorbell Buffer Config: Not Supported 00:23:29.809 Get LBA Status Capability: Not Supported 00:23:29.809 Command & Feature Lockdown Capability: Not Supported 00:23:29.809 Abort Command Limit: 4 00:23:29.809 Async Event Request Limit: 4 00:23:29.809 Number of Firmware Slots: N/A 00:23:29.809 Firmware Slot 1 Read-Only: N/A 00:23:29.809 Firmware Activation Without Reset: N/A 00:23:29.809 Multiple Update Detection Support: N/A 00:23:29.809 Firmware Update Granularity: No Information Provided 00:23:29.809 Per-Namespace SMART Log: No 00:23:29.809 Asymmetric Namespace Access Log Page: Not Supported 00:23:29.809 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:29.809 Command Effects Log Page: Supported 00:23:29.809 Get Log Page Extended Data: Supported 00:23:29.809 Telemetry Log Pages: Not Supported 00:23:29.809 Persistent Event Log Pages: Not Supported 00:23:29.809 Supported Log Pages Log Page: May Support 00:23:29.809 Commands Supported & Effects Log Page: Not Supported 00:23:29.809 Feature Identifiers & Effects Log Page:May Support 00:23:29.810 NVMe-MI 
Commands & Effects Log Page: May Support 00:23:29.810 Data Area 4 for Telemetry Log: Not Supported 00:23:29.810 Error Log Page Entries Supported: 128 00:23:29.810 Keep Alive: Supported 00:23:29.810 Keep Alive Granularity: 10000 ms 00:23:29.810 00:23:29.810 NVM Command Set Attributes 00:23:29.810 ========================== 00:23:29.810 Submission Queue Entry Size 00:23:29.810 Max: 64 00:23:29.810 Min: 64 00:23:29.810 Completion Queue Entry Size 00:23:29.810 Max: 16 00:23:29.810 Min: 16 00:23:29.810 Number of Namespaces: 32 00:23:29.810 Compare Command: Supported 00:23:29.810 Write Uncorrectable Command: Not Supported 00:23:29.810 Dataset Management Command: Supported 00:23:29.810 Write Zeroes Command: Supported 00:23:29.810 Set Features Save Field: Not Supported 00:23:29.810 Reservations: Supported 00:23:29.810 Timestamp: Not Supported 00:23:29.810 Copy: Supported 00:23:29.810 Volatile Write Cache: Present 00:23:29.810 Atomic Write Unit (Normal): 1 00:23:29.810 Atomic Write Unit (PFail): 1 00:23:29.810 Atomic Compare & Write Unit: 1 00:23:29.810 Fused Compare & Write: Supported 00:23:29.810 Scatter-Gather List 00:23:29.810 SGL Command Set: Supported 00:23:29.810 SGL Keyed: Supported 00:23:29.810 SGL Bit Bucket Descriptor: Not Supported 00:23:29.810 SGL Metadata Pointer: Not Supported 00:23:29.810 Oversized SGL: Not Supported 00:23:29.810 SGL Metadata Address: Not Supported 00:23:29.810 SGL Offset: Supported 00:23:29.810 Transport SGL Data Block: Not Supported 00:23:29.810 Replay Protected Memory Block: Not Supported 00:23:29.810 00:23:29.810 Firmware Slot Information 00:23:29.810 ========================= 00:23:29.810 Active slot: 1 00:23:29.810 Slot 1 Firmware Revision: 25.01 00:23:29.810 00:23:29.810 00:23:29.810 Commands Supported and Effects 00:23:29.810 ============================== 00:23:29.810 Admin Commands 00:23:29.810 -------------- 00:23:29.810 Get Log Page (02h): Supported 00:23:29.810 Identify (06h): Supported 00:23:29.810 Abort (08h): Supported 
00:23:29.810 Set Features (09h): Supported 00:23:29.810 Get Features (0Ah): Supported 00:23:29.810 Asynchronous Event Request (0Ch): Supported 00:23:29.810 Keep Alive (18h): Supported 00:23:29.810 I/O Commands 00:23:29.810 ------------ 00:23:29.810 Flush (00h): Supported LBA-Change 00:23:29.810 Write (01h): Supported LBA-Change 00:23:29.810 Read (02h): Supported 00:23:29.810 Compare (05h): Supported 00:23:29.810 Write Zeroes (08h): Supported LBA-Change 00:23:29.810 Dataset Management (09h): Supported LBA-Change 00:23:29.810 Copy (19h): Supported LBA-Change 00:23:29.810 00:23:29.810 Error Log 00:23:29.810 ========= 00:23:29.810 00:23:29.810 Arbitration 00:23:29.810 =========== 00:23:29.810 Arbitration Burst: 1 00:23:29.810 00:23:29.810 Power Management 00:23:29.810 ================ 00:23:29.810 Number of Power States: 1 00:23:29.810 Current Power State: Power State #0 00:23:29.810 Power State #0: 00:23:29.810 Max Power: 0.00 W 00:23:29.810 Non-Operational State: Operational 00:23:29.810 Entry Latency: Not Reported 00:23:29.810 Exit Latency: Not Reported 00:23:29.810 Relative Read Throughput: 0 00:23:29.810 Relative Read Latency: 0 00:23:29.810 Relative Write Throughput: 0 00:23:29.810 Relative Write Latency: 0 00:23:29.810 Idle Power: Not Reported 00:23:29.810 Active Power: Not Reported 00:23:29.810 Non-Operational Permissive Mode: Not Supported 00:23:29.810 00:23:29.810 Health Information 00:23:29.810 ================== 00:23:29.810 Critical Warnings: 00:23:29.810 Available Spare Space: OK 00:23:29.810 Temperature: OK 00:23:29.810 Device Reliability: OK 00:23:29.810 Read Only: No 00:23:29.810 Volatile Memory Backup: OK 00:23:29.810 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:29.810 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:29.810 Available Spare: 0% 00:23:29.810 Available Spare Threshold: 0% 00:23:29.810 Life Percentage Used:[2024-10-07 09:44:18.796485] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.810 [2024-10-07 
09:44:18.796496] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xce1760) 00:23:29.810 [2024-10-07 09:44:18.796507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.810 [2024-10-07 09:44:18.796529] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f00, cid 7, qid 0 00:23:29.810 [2024-10-07 09:44:18.796661] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.810 [2024-10-07 09:44:18.796685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.810 [2024-10-07 09:44:18.796692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.810 [2024-10-07 09:44:18.796699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f00) on tqpair=0xce1760 00:23:29.810 [2024-10-07 09:44:18.796741] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:29.810 [2024-10-07 09:44:18.796761] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41480) on tqpair=0xce1760 00:23:29.810 [2024-10-07 09:44:18.796771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.810 [2024-10-07 09:44:18.796780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41600) on tqpair=0xce1760 00:23:29.810 [2024-10-07 09:44:18.796788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.810 [2024-10-07 09:44:18.796796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41780) on tqpair=0xce1760 00:23:29.810 [2024-10-07 09:44:18.796803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.810 [2024-10-07 
09:44:18.796811] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:29.810 [2024-10-07 09:44:18.796818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.810 [2024-10-07 09:44:18.796834] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.810 [2024-10-07 09:44:18.796843] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.810 [2024-10-07 09:44:18.796849] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:29.810 [2024-10-07 09:44:18.796860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.810 [2024-10-07 09:44:18.796882] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:29.810 [2024-10-07 09:44:18.796974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.810 [2024-10-07 09:44:18.796986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.810 [2024-10-07 09:44:18.796993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.810 [2024-10-07 09:44:18.797000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:29.810 [2024-10-07 09:44:18.797011] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.810 [2024-10-07 09:44:18.797019] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.810 [2024-10-07 09:44:18.797025] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:29.810 [2024-10-07 09:44:18.797035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.069 [2024-10-07 09:44:18.797060] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.069 [2024-10-07 09:44:18.797151] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.069 [2024-10-07 09:44:18.797164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.069 [2024-10-07 09:44:18.797171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.069 [2024-10-07 09:44:18.797185] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:30.069 [2024-10-07 09:44:18.797193] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:30.069 [2024-10-07 09:44:18.797208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797223] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.069 [2024-10-07 09:44:18.797234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.069 [2024-10-07 09:44:18.797254] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.069 [2024-10-07 09:44:18.797330] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.069 [2024-10-07 09:44:18.797344] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.069 [2024-10-07 09:44:18.797351] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797357] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.069 [2024-10-07 09:44:18.797373] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797382] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.069 [2024-10-07 09:44:18.797399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.069 [2024-10-07 09:44:18.797419] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.069 [2024-10-07 09:44:18.797495] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.069 [2024-10-07 09:44:18.797508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.069 [2024-10-07 09:44:18.797515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.069 [2024-10-07 09:44:18.797542] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797551] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797557] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.069 [2024-10-07 09:44:18.797567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.069 [2024-10-07 09:44:18.797588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.069 [2024-10-07 09:44:18.797662] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.069 [2024-10-07 09:44:18.797684] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.069 [2024-10-07 09:44:18.797692] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797698] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.069 [2024-10-07 09:44:18.797715] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797725] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797731] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.069 [2024-10-07 09:44:18.797741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.069 [2024-10-07 09:44:18.797762] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.069 [2024-10-07 09:44:18.797840] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.069 [2024-10-07 09:44:18.797854] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.069 [2024-10-07 09:44:18.797861] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797867] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.069 [2024-10-07 09:44:18.797883] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797892] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.797898] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.069 [2024-10-07 09:44:18.797908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.069 [2024-10-07 09:44:18.797928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.069 [2024-10-07 
09:44:18.798003] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.069 [2024-10-07 09:44:18.798016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.069 [2024-10-07 09:44:18.798023] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.798030] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.069 [2024-10-07 09:44:18.798045] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.798054] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.798060] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.069 [2024-10-07 09:44:18.798071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.069 [2024-10-07 09:44:18.798091] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.069 [2024-10-07 09:44:18.798164] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.069 [2024-10-07 09:44:18.798177] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.069 [2024-10-07 09:44:18.798184] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.798190] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.069 [2024-10-07 09:44:18.798210] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.798220] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.798226] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.069 [2024-10-07 09:44:18.798236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.069 [2024-10-07 09:44:18.798257] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.069 [2024-10-07 09:44:18.798332] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.069 [2024-10-07 09:44:18.798346] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.069 [2024-10-07 09:44:18.798353] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.069 [2024-10-07 09:44:18.798359] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.070 [2024-10-07 09:44:18.798375] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.070 [2024-10-07 09:44:18.798384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.070 [2024-10-07 09:44:18.798390] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.070 [2024-10-07 09:44:18.798400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.070 [2024-10-07 09:44:18.798420] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.070 [2024-10-07 09:44:18.798493] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.070 [2024-10-07 09:44:18.798506] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.070 [2024-10-07 09:44:18.798512] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.070 [2024-10-07 09:44:18.798519] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.070 [2024-10-07 09:44:18.798534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.070 [2024-10-07 09:44:18.798543] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.070 
[2024-10-07 09:44:18.798549] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.070 [2024-10-07 09:44:18.798559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.070 [2024-10-07 09:44:18.798579] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.070 [2024-10-07 09:44:18.798652] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.070 [2024-10-07 09:44:18.802677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.070 [2024-10-07 09:44:18.802692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.070 [2024-10-07 09:44:18.802699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 00:23:30.070 [2024-10-07 09:44:18.802717] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:30.070 [2024-10-07 09:44:18.802741] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:30.070 [2024-10-07 09:44:18.802748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce1760) 00:23:30.070 [2024-10-07 09:44:18.802758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.070 [2024-10-07 09:44:18.802781] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41900, cid 3, qid 0 00:23:30.070 [2024-10-07 09:44:18.802876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:30.070 [2024-10-07 09:44:18.802890] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:30.070 [2024-10-07 09:44:18.802897] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:30.070 [2024-10-07 09:44:18.802903] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41900) on tqpair=0xce1760 
00:23:30.070 [2024-10-07 09:44:18.802916] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:30.070 0% 00:23:30.070 Data Units Read: 0 00:23:30.070 Data Units Written: 0 00:23:30.070 Host Read Commands: 0 00:23:30.070 Host Write Commands: 0 00:23:30.070 Controller Busy Time: 0 minutes 00:23:30.070 Power Cycles: 0 00:23:30.070 Power On Hours: 0 hours 00:23:30.070 Unsafe Shutdowns: 0 00:23:30.070 Unrecoverable Media Errors: 0 00:23:30.070 Lifetime Error Log Entries: 0 00:23:30.070 Warning Temperature Time: 0 minutes 00:23:30.070 Critical Temperature Time: 0 minutes 00:23:30.070 00:23:30.070 Number of Queues 00:23:30.070 ================ 00:23:30.070 Number of I/O Submission Queues: 127 00:23:30.070 Number of I/O Completion Queues: 127 00:23:30.070 00:23:30.070 Active Namespaces 00:23:30.070 ================= 00:23:30.070 Namespace ID:1 00:23:30.070 Error Recovery Timeout: Unlimited 00:23:30.070 Command Set Identifier: NVM (00h) 00:23:30.070 Deallocate: Supported 00:23:30.070 Deallocated/Unwritten Error: Not Supported 00:23:30.070 Deallocated Read Value: Unknown 00:23:30.070 Deallocate in Write Zeroes: Not Supported 00:23:30.070 Deallocated Guard Field: 0xFFFF 00:23:30.070 Flush: Supported 00:23:30.070 Reservation: Supported 00:23:30.070 Namespace Sharing Capabilities: Multiple Controllers 00:23:30.070 Size (in LBAs): 131072 (0GiB) 00:23:30.070 Capacity (in LBAs): 131072 (0GiB) 00:23:30.070 Utilization (in LBAs): 131072 (0GiB) 00:23:30.070 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:30.070 EUI64: ABCDEF0123456789 00:23:30.070 UUID: c1b18451-bdd0-4cc3-9629-b68064ec0f45 00:23:30.070 Thin Provisioning: Not Supported 00:23:30.070 Per-NS Atomic Units: Yes 00:23:30.070 Atomic Boundary Size (Normal): 0 00:23:30.070 Atomic Boundary Size (PFail): 0 00:23:30.070 Atomic Boundary Offset: 0 00:23:30.070 Maximum Single Source Range Length: 65535 00:23:30.070 Maximum Copy Length: 65535 00:23:30.070 
Maximum Source Range Count: 1 00:23:30.070 NGUID/EUI64 Never Reused: No 00:23:30.070 Namespace Write Protected: No 00:23:30.070 Number of LBA Formats: 1 00:23:30.070 Current LBA Format: LBA Format #00 00:23:30.070 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:30.070 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.070 rmmod nvme_tcp 00:23:30.070 rmmod nvme_fabrics 00:23:30.070 rmmod nvme_keyring 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:30.070 
09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 278551 ']' 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 278551 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 278551 ']' 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 278551 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 278551 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 278551' 00:23:30.070 killing process with pid 278551 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 278551 00:23:30.070 09:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 278551 00:23:30.328 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:30.329 09:44:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.329 09:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.232 09:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.492 00:23:32.492 real 0m5.479s 00:23:32.492 user 0m4.992s 00:23:32.492 sys 0m1.768s 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.492 ************************************ 00:23:32.492 END TEST nvmf_identify 00:23:32.492 ************************************ 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.492 ************************************ 00:23:32.492 START TEST nvmf_perf 00:23:32.492 ************************************ 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:32.492 * 
Looking for test storage... 00:23:32.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:32.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.492 --rc genhtml_branch_coverage=1 00:23:32.492 --rc genhtml_function_coverage=1 00:23:32.492 --rc genhtml_legend=1 00:23:32.492 --rc geninfo_all_blocks=1 00:23:32.492 --rc geninfo_unexecuted_blocks=1 00:23:32.492 00:23:32.492 ' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:32.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:32.492 --rc genhtml_branch_coverage=1 00:23:32.492 --rc genhtml_function_coverage=1 00:23:32.492 --rc genhtml_legend=1 00:23:32.492 --rc geninfo_all_blocks=1 00:23:32.492 --rc geninfo_unexecuted_blocks=1 00:23:32.492 00:23:32.492 ' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:32.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.492 --rc genhtml_branch_coverage=1 00:23:32.492 --rc genhtml_function_coverage=1 00:23:32.492 --rc genhtml_legend=1 00:23:32.492 --rc geninfo_all_blocks=1 00:23:32.492 --rc geninfo_unexecuted_blocks=1 00:23:32.492 00:23:32.492 ' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:32.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.492 --rc genhtml_branch_coverage=1 00:23:32.492 --rc genhtml_function_coverage=1 00:23:32.492 --rc genhtml_legend=1 00:23:32.492 --rc geninfo_all_blocks=1 00:23:32.492 --rc geninfo_unexecuted_blocks=1 00:23:32.492 00:23:32.492 ' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.492 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:32.493 09:44:21 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.493 09:44:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.027 09:44:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.027 
09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:35.027 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:35.027 Found 0000:09:00.1 (0x8086 - 
0x1592) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:35.027 Found net devices under 0000:09:00.0: cvl_0_0 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:35.027 09:44:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:35.027 Found net devices under 0000:09:00.1: cvl_0_1 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT'
00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:35.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:35.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms
00:23:35.027
00:23:35.027 --- 10.0.0.2 ping statistics ---
00:23:35.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:35.027 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms
00:23:35.027 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:35.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:35.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms
00:23:35.028
00:23:35.028 --- 10.0.0.1 ping statistics ---
00:23:35.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:35.028 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter
start_nvmf_tgt 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=280644 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 280644 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 280644 ']' 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.028 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:35.028 [2024-10-07 09:44:23.699036] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:23:35.028 [2024-10-07 09:44:23.699126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.028 [2024-10-07 09:44:23.761369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.028 [2024-10-07 09:44:23.871887] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.028 [2024-10-07 09:44:23.871960] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.028 [2024-10-07 09:44:23.871973] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.028 [2024-10-07 09:44:23.871984] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.028 [2024-10-07 09:44:23.871993] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:35.028 [2024-10-07 09:44:23.874688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.028 [2024-10-07 09:44:23.874755] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.028 [2024-10-07 09:44:23.874776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.028 [2024-10-07 09:44:23.874779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.028 09:44:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.028 09:44:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:35.028 09:44:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:35.028 09:44:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:35.028 09:44:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:35.287 09:44:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.287 09:44:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:35.287 09:44:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:38.581 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:38.581 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:38.581 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:23:38.581 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:38.839 09:44:27 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:38.839 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']' 00:23:38.839 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:38.839 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:38.839 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:39.097 [2024-10-07 09:44:28.015864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.097 09:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.354 09:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:39.354 09:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.611 09:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:39.611 09:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:39.868 09:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.126 [2024-10-07 09:44:29.090492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.126 09:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420
00:23:40.691 09:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']'
00:23:40.691 09:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0'
00:23:40.691 09:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:23:40.691 09:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0'
00:23:41.623 Initializing NVMe Controllers
00:23:41.623 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54]
00:23:41.623 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0
00:23:41.623 Initialization complete. Launching workers.
00:23:41.624 ========================================================
00:23:41.624 Latency(us)
00:23:41.624 Device Information : IOPS MiB/s Average min max
00:23:41.624 PCIE (0000:84:00.0) NSID 1 from core 0: 84368.88 329.57 378.84 42.56 5267.99
00:23:41.624 ========================================================
00:23:41.624 Total : 84368.88 329.57 378.84 42.56 5267.99
00:23:41.624
00:23:41.624 09:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:43.526 Initializing NVMe Controllers
00:23:43.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:43.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:43.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:43.526 Initialization complete. Launching workers.
00:23:43.526 ========================================================
00:23:43.526 Latency(us)
00:23:43.526 Device Information : IOPS MiB/s Average min max
00:23:43.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 88.69 0.35 11578.29 140.37 46057.34
00:23:43.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.77 0.26 15096.33 6998.80 47900.17
00:23:43.526 ========================================================
00:23:43.526 Total : 155.46 0.61 13089.25 140.37 47900.17
00:23:43.526
00:23:43.526 09:44:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:44.459 Initializing NVMe Controllers
00:23:44.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:44.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:44.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:44.459 Initialization complete. Launching workers.
00:23:44.459 ========================================================
00:23:44.459 Latency(us)
00:23:44.459 Device Information : IOPS MiB/s Average min max
00:23:44.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8331.89 32.55 3841.69 709.52 10793.32
00:23:44.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3830.32 14.96 8484.42 6796.79 47606.96
00:23:44.459 ========================================================
00:23:44.459 Total : 12162.21 47.51 5303.85 709.52 47606.96
00:23:44.459
00:23:44.459 09:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:44.459 09:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:44.459 09:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:46.995 Initializing NVMe Controllers
00:23:46.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:46.995 Controller IO queue size 128, less than required.
00:23:46.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:46.995 Controller IO queue size 128, less than required.
00:23:46.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:46.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:46.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:46.995 Initialization complete. Launching workers.
00:23:46.995 ========================================================
00:23:46.995 Latency(us)
00:23:46.995 Device Information : IOPS MiB/s Average min max
00:23:46.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1623.40 405.85 80247.87 47278.39 132149.47
00:23:46.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 526.47 131.62 253796.78 103165.92 395951.91
00:23:46.995 ========================================================
00:23:46.995 Total : 2149.87 537.47 122747.17 47278.39 395951.91
00:23:46.995
00:23:46.995 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:47.253 No valid NVMe controllers or AIO or URING devices found
00:23:47.253 Initializing NVMe Controllers
00:23:47.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:47.253 Controller IO queue size 128, less than required.
00:23:47.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.253 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:47.253 Controller IO queue size 128, less than required.
00:23:47.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.253 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512.
Removing this ns from test 00:23:47.253 WARNING: Some requested NVMe devices were skipped 00:23:47.253 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:49.785 Initializing NVMe Controllers 00:23:49.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.785 Controller IO queue size 128, less than required. 00:23:49.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.785 Controller IO queue size 128, less than required. 00:23:49.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:49.785 Initialization complete. Launching workers. 
00:23:49.785
00:23:49.785 ====================
00:23:49.785 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:49.785 TCP transport:
00:23:49.785 polls: 9324
00:23:49.785 idle_polls: 6398
00:23:49.785 sock_completions: 2926
00:23:49.785 nvme_completions: 5545
00:23:49.785 submitted_requests: 8296
00:23:49.785 queued_requests: 1
00:23:49.785
00:23:49.785 ====================
00:23:49.785 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:49.785 TCP transport:
00:23:49.785 polls: 11789
00:23:49.785 idle_polls: 8428
00:23:49.785 sock_completions: 3361
00:23:49.785 nvme_completions: 6271
00:23:49.785 submitted_requests: 9396
00:23:49.785 queued_requests: 1
00:23:49.785 ========================================================
00:23:49.785 Latency(us)
00:23:49.785 Device Information : IOPS MiB/s Average min max
00:23:49.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1382.83 345.71 94766.15 55482.79 159364.67
00:23:49.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1563.91 390.98 82341.87 46373.43 113631.13
00:23:49.785 ========================================================
00:23:49.785 Total : 2946.73 736.68 88172.26 46373.43 159364.67
00:23:49.785
00:23:49.785 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf --
nvmf/common.sh@121 -- # sync 00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.043 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.043 rmmod nvme_tcp 00:23:50.043 rmmod nvme_fabrics 00:23:50.043 rmmod nvme_keyring 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 280644 ']' 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 280644 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 280644 ']' 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 280644 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 280644 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 280644' 00:23:50.302 killing process with pid 280644 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # 
kill 280644 00:23:50.302 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 280644 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.208 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.114 00:23:54.114 real 0m21.447s 00:23:54.114 user 1m5.748s 00:23:54.114 sys 0m5.583s 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:54.114 ************************************ 00:23:54.114 END TEST nvmf_perf 00:23:54.114 ************************************ 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.114 ************************************ 00:23:54.114 START TEST nvmf_fio_host 00:23:54.114 ************************************ 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:54.114 * Looking for test storage... 00:23:54.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.114 09:44:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.114 09:44:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:54.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.114 --rc genhtml_branch_coverage=1 00:23:54.114 --rc genhtml_function_coverage=1 00:23:54.114 --rc genhtml_legend=1 00:23:54.114 --rc geninfo_all_blocks=1 00:23:54.114 --rc geninfo_unexecuted_blocks=1 00:23:54.114 00:23:54.114 ' 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:54.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.114 --rc genhtml_branch_coverage=1 00:23:54.114 --rc genhtml_function_coverage=1 00:23:54.114 --rc genhtml_legend=1 00:23:54.114 --rc geninfo_all_blocks=1 00:23:54.114 --rc geninfo_unexecuted_blocks=1 00:23:54.114 00:23:54.114 ' 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:54.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.114 --rc genhtml_branch_coverage=1 00:23:54.114 --rc genhtml_function_coverage=1 00:23:54.114 --rc genhtml_legend=1 00:23:54.114 --rc geninfo_all_blocks=1 00:23:54.114 --rc geninfo_unexecuted_blocks=1 00:23:54.114 00:23:54.114 ' 00:23:54.114 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:54.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.114 --rc genhtml_branch_coverage=1 00:23:54.114 --rc genhtml_function_coverage=1 00:23:54.115 --rc genhtml_legend=1 00:23:54.115 --rc geninfo_all_blocks=1 00:23:54.115 --rc geninfo_unexecuted_blocks=1 00:23:54.115 00:23:54.115 ' 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:54.115 09:44:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.115 09:44:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.0 (0x8086 - 0x1592)' 00:23:56.019 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:56.019 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.019 09:44:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:56.019 Found net devices under 0000:09:00.0: cvl_0_0 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.019 09:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:56.019 Found net devices under 0000:09:00.1: cvl_0_1 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.019 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:56.020 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.020 09:44:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:56.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:23:56.278 00:23:56.278 --- 10.0.0.2 ping statistics --- 00:23:56.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.278 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:23:56.278 00:23:56.278 --- 10.0.0.1 ping statistics --- 00:23:56.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.278 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=284440 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 284440 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 284440 ']' 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.278 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.278 [2024-10-07 09:44:45.214441] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:23:56.278 [2024-10-07 09:44:45.214512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.536 [2024-10-07 09:44:45.275180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.536 [2024-10-07 09:44:45.381663] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.536 [2024-10-07 09:44:45.381720] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:56.536 [2024-10-07 09:44:45.381743] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.536 [2024-10-07 09:44:45.381754] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.536 [2024-10-07 09:44:45.381764] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.536 [2024-10-07 09:44:45.383205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.536 [2024-10-07 09:44:45.383270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.536 [2024-10-07 09:44:45.383343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.536 [2024-10-07 09:44:45.383339] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.536 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.536 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:56.536 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:57.102 [2024-10-07 09:44:45.817678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.102 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:57.102 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.102 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.102 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:57.360 Malloc1 00:23:57.360 09:44:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.618 09:44:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:57.876 09:44:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.134 [2024-10-07 09:44:46.984331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.134 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:58.392 09:44:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:58.392 09:44:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.651 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:58.651 fio-3.35 00:23:58.652 Starting 1 thread 00:24:01.178 00:24:01.178 test: (groupid=0, jobs=1): err= 0: pid=284782: Mon Oct 7 09:44:49 2024 00:24:01.178 read: IOPS=8795, BW=34.4MiB/s (36.0MB/s)(69.0MiB/2007msec) 00:24:01.178 slat (nsec): min=1853, max=166743, avg=2459.58, stdev=1935.42 00:24:01.178 clat (usec): min=2527, max=13029, avg=7915.34, stdev=669.67 00:24:01.178 lat (usec): min=2562, max=13031, avg=7917.80, stdev=669.54 00:24:01.178 clat percentiles (usec): 00:24:01.178 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:24:01.178 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:24:01.178 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:24:01.178 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[11338], 99.95th=[12518], 00:24:01.178 | 99.99th=[13042] 00:24:01.178 bw ( KiB/s): min=34520, max=35752, per=99.97%, avg=35174.00, stdev=520.69, samples=4 00:24:01.178 iops : min= 8630, max= 8938, avg=8793.50, stdev=130.17, samples=4 00:24:01.178 write: IOPS=8804, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2007msec); 0 zone resets 00:24:01.178 slat (nsec): min=1964, max=148216, avg=2571.29, stdev=1520.22 00:24:01.178 clat (usec): min=1470, max=12780, avg=6576.78, stdev=554.58 00:24:01.178 lat (usec): min=1480, max=12782, avg=6579.35, stdev=554.51 00:24:01.178 clat percentiles (usec): 00:24:01.178 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6194], 00:24:01.178 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 
00:24:01.178 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:24:01.178 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[10945], 99.95th=[12256], 00:24:01.178 | 99.99th=[12780] 00:24:01.178 bw ( KiB/s): min=35040, max=35328, per=100.00%, avg=35220.00, stdev=124.88, samples=4 00:24:01.178 iops : min= 8760, max= 8832, avg=8805.00, stdev=31.22, samples=4 00:24:01.178 lat (msec) : 2=0.03%, 4=0.09%, 10=99.71%, 20=0.18% 00:24:01.178 cpu : usr=65.35%, sys=33.00%, ctx=106, majf=0, minf=37 00:24:01.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:01.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.178 issued rwts: total=17653,17671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.178 00:24:01.178 Run status group 0 (all jobs): 00:24:01.178 READ: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=69.0MiB (72.3MB), run=2007-2007msec 00:24:01.178 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.4MB), run=2007-2007msec 00:24:01.178 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:01.178 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:01.178 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:01.178 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:01.178 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n 
'' ]] 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:01.179 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:01.179 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:01.179 fio-3.35 00:24:01.179 Starting 1 thread 00:24:03.709 00:24:03.709 test: (groupid=0, jobs=1): err= 0: pid=285107: Mon Oct 7 09:44:52 2024 00:24:03.709 read: IOPS=8145, BW=127MiB/s (133MB/s)(255MiB/2006msec) 00:24:03.709 slat (usec): min=2, max=113, avg= 3.63, stdev= 1.62 00:24:03.709 clat (usec): min=2402, max=54635, avg=9084.78, stdev=4090.00 00:24:03.709 lat (usec): min=2406, max=54639, avg=9088.41, stdev=4089.99 00:24:03.709 clat percentiles (usec): 00:24:03.709 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7111], 00:24:03.709 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:24:03.709 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[12387], 00:24:03.709 | 99.00th=[15139], 99.50th=[48497], 99.90th=[52691], 99.95th=[54264], 00:24:03.709 | 99.99th=[54789] 00:24:03.709 bw ( KiB/s): min=54656, max=74880, per=51.14%, avg=66648.00, stdev=9126.51, samples=4 00:24:03.709 iops : min= 3416, max= 4680, avg=4165.50, stdev=570.41, samples=4 00:24:03.709 write: IOPS=4954, BW=77.4MiB/s (81.2MB/s)(136MiB/1763msec); 0 zone resets 00:24:03.709 slat (usec): min=30, max=142, avg=33.17, stdev= 4.47 00:24:03.709 clat (usec): min=4203, max=20365, avg=11682.02, stdev=1980.85 00:24:03.709 lat (usec): min=4235, max=20396, avg=11715.19, stdev=1980.89 00:24:03.709 clat percentiles (usec): 00:24:03.709 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 
9896], 00:24:03.709 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[11994], 00:24:03.709 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14353], 95.00th=[15139], 00:24:03.709 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19006], 99.95th=[19268], 00:24:03.709 | 99.99th=[20317] 00:24:03.709 bw ( KiB/s): min=57984, max=77856, per=87.73%, avg=69536.00, stdev=9005.71, samples=4 00:24:03.709 iops : min= 3624, max= 4866, avg=4346.00, stdev=562.86, samples=4 00:24:03.709 lat (msec) : 4=0.18%, 10=56.23%, 20=43.08%, 50=0.32%, 100=0.20% 00:24:03.709 cpu : usr=76.66%, sys=21.95%, ctx=50, majf=0, minf=67 00:24:03.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:03.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.709 issued rwts: total=16340,8734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.709 00:24:03.709 Run status group 0 (all jobs): 00:24:03.709 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (268MB), run=2006-2006msec 00:24:03.709 WRITE: bw=77.4MiB/s (81.2MB/s), 77.4MiB/s-77.4MiB/s (81.2MB/s-81.2MB/s), io=136MiB (143MB), run=1763-1763msec 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.709 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.709 rmmod nvme_tcp 00:24:03.970 rmmod nvme_fabrics 00:24:03.970 rmmod nvme_keyring 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 284440 ']' 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 284440 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 284440 ']' 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 284440 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 284440 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
284440' 00:24:03.970 killing process with pid 284440 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 284440 00:24:03.970 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 284440 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.230 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.141 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.141 00:24:06.141 real 0m12.352s 00:24:06.141 user 0m36.442s 00:24:06.141 sys 0m3.953s 00:24:06.141 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:06.141 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.141 ************************************ 
00:24:06.141 END TEST nvmf_fio_host 00:24:06.141 ************************************ 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.400 ************************************ 00:24:06.400 START TEST nvmf_failover 00:24:06.400 ************************************ 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:06.400 * Looking for test storage... 00:24:06.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.400 09:44:55 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.400 --rc genhtml_branch_coverage=1 00:24:06.400 --rc genhtml_function_coverage=1 00:24:06.400 --rc genhtml_legend=1 00:24:06.400 --rc geninfo_all_blocks=1 00:24:06.400 --rc geninfo_unexecuted_blocks=1 00:24:06.400 00:24:06.400 ' 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.400 --rc genhtml_branch_coverage=1 00:24:06.400 --rc genhtml_function_coverage=1 00:24:06.400 --rc genhtml_legend=1 00:24:06.400 --rc geninfo_all_blocks=1 00:24:06.400 --rc geninfo_unexecuted_blocks=1 00:24:06.400 00:24:06.400 ' 00:24:06.400 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.400 --rc genhtml_branch_coverage=1 00:24:06.401 --rc genhtml_function_coverage=1 00:24:06.401 --rc genhtml_legend=1 00:24:06.401 --rc geninfo_all_blocks=1 00:24:06.401 --rc geninfo_unexecuted_blocks=1 00:24:06.401 00:24:06.401 ' 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:06.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.401 --rc genhtml_branch_coverage=1 00:24:06.401 --rc genhtml_function_coverage=1 00:24:06.401 --rc genhtml_legend=1 00:24:06.401 --rc 
geninfo_all_blocks=1 00:24:06.401 --rc geninfo_unexecuted_blocks=1 00:24:06.401 00:24:06.401 ' 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.401 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:08.930 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.930 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.930 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.930 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:08.930 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.930 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.930 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.931 09:44:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:08.931 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:08.931 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:08.931 Found net devices under 0000:09:00.0: cvl_0_0 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:08.931 Found net devices under 0000:09:00.1: cvl_0_1 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:24:08.931 00:24:08.931 --- 10.0.0.2 ping statistics --- 00:24:08.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.931 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms
00:24:08.931 
00:24:08.931 --- 10.0.0.1 ping statistics ---
00:24:08.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:08.931 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:08.931 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=287317
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 287317
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 287317 ']'
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:08.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:08.932 [2024-10-07 09:44:57.537678] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:24:08.932 [2024-10-07 09:44:57.537755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:08.932 [2024-10-07 09:44:57.597847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:08.932 [2024-10-07 09:44:57.705002] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:08.932 [2024-10-07 09:44:57.705057] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:08.932 [2024-10-07 09:44:57.705081] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:08.932 [2024-10-07 09:44:57.705092] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:08.932 [2024-10-07 09:44:57.705101] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:08.932 [2024-10-07 09:44:57.705921] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:24:08.932 [2024-10-07 09:44:57.705954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:24:08.932 [2024-10-07 09:44:57.705957] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:08.932 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:09.192 [2024-10-07 09:44:58.153066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:09.450 09:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:09.707 Malloc0
00:24:09.707 09:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:09.965 09:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:10.222 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:10.480 [2024-10-07 09:44:59.371124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:10.480 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:10.739 [2024-10-07 09:44:59.692148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:10.739 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:10.997 [2024-10-07 09:44:59.956938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=287592
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 287592 /var/tmp/bdevperf.sock
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 287592 ']'
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:10.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:10.997 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:11.563 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:11.563 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:24:11.563 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:11.820 NVMe0n1
00:24:11.821 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:12.078 
00:24:12.078 09:45:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=287797
00:24:12.078 09:45:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:12.078 09:45:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:24:13.455 09:45:02
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:13.455 [2024-10-07 09:45:02.335090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.455 [2024-10-07 09:45:02.335285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.456 [2024-10-07 09:45:02.335297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.456 [2024-10-07 09:45:02.335308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.456 [2024-10-07 09:45:02.335319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be7f0 is same with the state(6) to be set
00:24:13.456 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:16.742 09:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:16.742 
00:24:17.001 09:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:17.259 [2024-10-07 09:45:06.059934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf2a0 is same with the state(6) to be set
00:24:17.259 [2024-10-07 09:45:06.060011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf2a0 is same with the state(6) to be set
00:24:17.259 [2024-10-07 09:45:06.060037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf2a0 is same with the state(6) to be set
00:24:17.259 [2024-10-07 09:45:06.060057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf2a0 is same with the state(6) to be set
00:24:17.259 [2024-10-07 09:45:06.060070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf2a0 is same with the state(6) to be set
00:24:17.259 [2024-10-07 09:45:06.060082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf2a0 is same with the state(6) to be set
00:24:17.259 [2024-10-07 09:45:06.060094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf2a0 is same with the state(6) to be set
00:24:17.259 [2024-10-07 09:45:06.060107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bf2a0 is same with the state(6) to be set
00:24:17.259 09:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:20.543 09:45:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:20.543 [2024-10-07 09:45:09.344887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:20.543 09:45:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:21.476 09:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:21.734 [2024-10-07 09:45:10.619992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 [2024-10-07 09:45:10.620416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484700 is same with the state(6) to be set
00:24:21.734 09:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 287797
00:24:28.300 {
00:24:28.300   "results": [
00:24:28.300     {
00:24:28.300       "job": "NVMe0n1",
00:24:28.300       "core_mask": "0x1",
00:24:28.300       "workload": "verify",
00:24:28.300       "status": "finished",
00:24:28.300       "verify_range": {
00:24:28.300         "start": 0,
00:24:28.300         "length": 16384
00:24:28.300       },
00:24:28.300       "queue_depth": 128,
00:24:28.300       "io_size": 4096,
00:24:28.300       "runtime": 15.008235,
00:24:28.300       "iops": 8500.133426748715,
00:24:28.300       "mibps": 33.203646198237166,
00:24:28.300       "io_failed": 3965,
00:24:28.300       "io_timeout": 0,
00:24:28.300       "avg_latency_us": 14576.329462505833,
00:24:28.300       "min_latency_us": 540.0651851851852,
00:24:28.300       "max_latency_us": 16602.453333333335
00:24:28.300     }
00:24:28.300   ],
00:24:28.300   "core_count": 1
00:24:28.300 }
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 287592
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 287592 ']'
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 287592
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 287592
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 287592'
00:24:28.300 killing process with pid 287592
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 287592
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 287592
00:24:28.300 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:28.300 [2024-10-07 09:45:00.028904] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:24:28.300 [2024-10-07 09:45:00.029042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287592 ]
00:24:28.300 [2024-10-07 09:45:00.090540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:28.300 [2024-10-07 09:45:00.202396] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:24:28.300 Running I/O for 15 seconds...
00:24:28.300 8415.00 IOPS, 32.87 MiB/s [2024-10-07 09:45:02.335724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:28.300 [2024-10-07 09:45:02.335767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.300 [2024-10-07 09:45:02.335794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.300 [2024-10-07 09:45:02.335810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.300 [2024-10-07 09:45:02.335828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.300 [2024-10-07 09:45:02.335842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.335859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.335873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.335889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.335903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.335919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.335933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.335949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.335982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.335998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.301 [2024-10-07 09:45:02.336054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.301 [2024-10-07 09:45:02.336541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.301 [2024-10-07 09:45:02.336555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.301 [2024-10-07 09:45:02.336569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.303 [2024-10-07 09:45:02.339231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.303 [2024-10-07 09:45:02.339245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.304 [2024-10-07 09:45:02.339688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.304 [2024-10-07 09:45:02.339707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:24:28.304 [2024-10-07 09:45:02.339719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78680 len:8 PRP1 0x0 PRP2 0x0
00:24:28.304 [2024-10-07 09:45:02.339733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.304 [2024-10-07 09:45:02.339797] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac3280 was disconnected and freed. reset controller.
00:24:28.304 [2024-10-07 09:45:02.339816] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:28.304 [2024-10-07 09:45:02.339851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:28.304 [2024-10-07 09:45:02.339870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.304 [... identical ASYNC EVENT REQUEST abort pairs for qid:0 cid:1-3 elided ...]
00:24:28.304 [2024-10-07 09:45:02.339975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:28.304 [2024-10-07 09:45:02.340037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa2ab0 (9): Bad file descriptor
00:24:28.304 [2024-10-07 09:45:02.343321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:28.304 [2024-10-07 09:45:02.382489] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:28.304 8289.00 IOPS, 32.38 MiB/s 8416.00 IOPS, 32.88 MiB/s 8480.25 IOPS, 33.13 MiB/s
00:24:28.304 [2024-10-07 09:45:06.063278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.304 [2024-10-07 09:45:06.063322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.304 [... identical READ abort pairs for lba:80136-80200 elided ...]
00:24:28.304 [2024-10-07 09:45:06.063632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:28.304 [2024-10-07 09:45:06.063646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.304 [... identical WRITE abort pairs for lba:80216-80584 elided ...]
00:24:28.305 [2024-10-07 09:45:06.065194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:28.305 [2024-10-07 09:45:06.065212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80592 len:8 PRP1 0x0 PRP2 0x0
00:24:28.305 [2024-10-07 09:45:06.065226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.305 [2024-10-07 09:45:06.065243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:28.306 [... identical manual-completion/abort triplets for lba:80600-80816 elided ...]
00:24:28.307 [2024-10-07 09:45:06.066622]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.066632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.066642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80824 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.066654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.066689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.066703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.066714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80832 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.066727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.066740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.066751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.066762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80840 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.066775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.066787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.066804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 
09:45:06.066816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80848 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.066829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.066842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.066853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.066864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.066876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.066889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.066900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.066910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80864 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.066923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.066940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.066952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.066962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80872 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.066990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067151] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80920 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80928 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 
[2024-10-07 09:45:06.067316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80936 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80944 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80952 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80960 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.307 [2024-10-07 09:45:06.067536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80968 len:8 PRP1 0x0 PRP2 0x0 00:24:28.307 [2024-10-07 09:45:06.067548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.307 [2024-10-07 09:45:06.067561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.307 [2024-10-07 09:45:06.067584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.067596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80976 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.067608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.067620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.067631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.067641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.067654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.067687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.067701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.067716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.067730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.067743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.067754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.067764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81000 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.067777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.067790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.067801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.067811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81008 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.067824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.067837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.067848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.067858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81016 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.067871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.067884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.067894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.067905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.067918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.067931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.067941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.067952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.067965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.067992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81056 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81064 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81072 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81080 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81088 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 
[2024-10-07 09:45:06.068335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81096 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81104 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81112 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:81120 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81128 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81136 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.308 [2024-10-07 09:45:06.068621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.308 [2024-10-07 09:45:06.068632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81144 len:8 PRP1 0x0 PRP2 0x0 00:24:28.308 [2024-10-07 09:45:06.068644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068724] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac5250 was disconnected and freed. reset controller. 00:24:28.308 [2024-10-07 09:45:06.068745] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:28.308 [2024-10-07 09:45:06.068780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.308 [2024-10-07 09:45:06.068798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.308 [2024-10-07 09:45:06.068827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.308 [2024-10-07 09:45:06.068842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.309 [2024-10-07 09:45:06.068856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.309 [2024-10-07 09:45:06.068871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.309 [2024-10-07 09:45:06.068884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.309 [2024-10-07 09:45:06.068897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:28.309 [2024-10-07 09:45:06.068959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa2ab0 (9): Bad file descriptor
00:24:28.309 [2024-10-07 09:45:06.072189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:28.309 [2024-10-07 09:45:06.106472] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:28.309 8430.00 IOPS, 32.93 MiB/s 8445.83 IOPS, 32.99 MiB/s 8448.57 IOPS, 33.00 MiB/s 8475.88 IOPS, 33.11 MiB/s 8478.89 IOPS, 33.12 MiB/s
00:24:28.309 [2024-10-07 09:45:10.622353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.309 [2024-10-07 09:45:10.622396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 30 further READ command/completion pairs (lba:4552 through lba:4784, len:8), each ABORTED - SQ DELETION (00/08), elided ...]
00:24:28.309 [2024-10-07 09:45:10.623388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:28.310 [2024-10-07 09:45:10.623407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 64 further WRITE command/completion pairs (lba:4800 through lba:5304, len:8), each ABORTED - SQ DELETION (00/08), elided ...]
00:24:28.311 [2024-10-07 09:45:10.625382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:28.311 [2024-10-07 09:45:10.625399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:8 PRP1 0x0 PRP2 0x0
00:24:28.311 [2024-10-07 09:45:10.625413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.311 [2024-10-07 09:45:10.625431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... 12 further queued-request abort sequences (WRITE lba:5320 through lba:5408, len:8), each completed manually and ABORTED - SQ DELETION (00/08), elided ...]
00:24:28.312 [2024-10-07 09:45:10.626047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:28.312 [2024-10-07 09:45:10.626057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:28.312 [2024-10-07 09:45:10.626067] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5416 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5424 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5432 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5448 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5456 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5464 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5480 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5488 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5496 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5512 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5520 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 
[2024-10-07 09:45:10.626759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5528 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5544 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:5552 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.626936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.312 [2024-10-07 09:45:10.626952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.312 [2024-10-07 09:45:10.626963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5560 len:8 PRP1 0x0 PRP2 0x0 00:24:28.312 [2024-10-07 09:45:10.626976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.312 [2024-10-07 09:45:10.627056] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ad2090 was disconnected and freed. reset controller. 00:24:28.312 [2024-10-07 09:45:10.627076] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:28.312 [2024-10-07 09:45:10.627108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.313 [2024-10-07 09:45:10.627142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.313 [2024-10-07 09:45:10.627158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.313 [2024-10-07 09:45:10.627171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.313 [2024-10-07 09:45:10.627189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.313 [2024-10-07 
09:45:10.627203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.313 [2024-10-07 09:45:10.627217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.313 [2024-10-07 09:45:10.627230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.313 [2024-10-07 09:45:10.627243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.313 [2024-10-07 09:45:10.627298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa2ab0 (9): Bad file descriptor 00:24:28.313 [2024-10-07 09:45:10.630533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.313 [2024-10-07 09:45:10.667780] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:28.313 8455.40 IOPS, 33.03 MiB/s 8481.55 IOPS, 33.13 MiB/s 8490.83 IOPS, 33.17 MiB/s 8492.15 IOPS, 33.17 MiB/s 8495.64 IOPS, 33.19 MiB/s 00:24:28.313 Latency(us) 00:24:28.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.313 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:28.313 Verification LBA range: start 0x0 length 0x4000 00:24:28.313 NVMe0n1 : 15.01 8500.13 33.20 264.19 0.00 14576.33 540.07 16602.45 00:24:28.313 =================================================================================================================== 00:24:28.313 Total : 8500.13 33.20 264.19 0.00 14576.33 540.07 16602.45 00:24:28.313 Received shutdown signal, test time was about 15.000000 seconds 00:24:28.313 00:24:28.313 Latency(us) 00:24:28.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.313 =================================================================================================================== 00:24:28.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=290101 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 290101 /var/tmp/bdevperf.sock 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 290101 ']' 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:28.313 09:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:28.313 [2024-10-07 09:45:17.113607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:28.313 09:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:28.571 [2024-10-07 09:45:17.374376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:28.571 09:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.829 NVMe0n1 00:24:28.829 09:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:29.396 00:24:29.396 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:29.652 00:24:29.652 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:29.652 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:29.908 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:30.167 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:33.452 09:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:33.452 09:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:33.452 09:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=290735 00:24:33.452 09:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.452 09:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 290735 00:24:34.980 { 00:24:34.980 "results": [ 00:24:34.980 { 00:24:34.980 "job": "NVMe0n1", 00:24:34.980 "core_mask": 
"0x1", 00:24:34.980 "workload": "verify", 00:24:34.980 "status": "finished", 00:24:34.980 "verify_range": { 00:24:34.980 "start": 0, 00:24:34.980 "length": 16384 00:24:34.980 }, 00:24:34.980 "queue_depth": 128, 00:24:34.980 "io_size": 4096, 00:24:34.980 "runtime": 1.009312, 00:24:34.980 "iops": 8554.34196759773, 00:24:34.980 "mibps": 33.41539831092863, 00:24:34.980 "io_failed": 0, 00:24:34.980 "io_timeout": 0, 00:24:34.980 "avg_latency_us": 14878.373246167179, 00:24:34.980 "min_latency_us": 2645.7125925925925, 00:24:34.980 "max_latency_us": 13592.651851851851 00:24:34.980 } 00:24:34.980 ], 00:24:34.980 "core_count": 1 00:24:34.980 } 00:24:34.980 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:34.980 [2024-10-07 09:45:16.593709] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:24:34.980 [2024-10-07 09:45:16.593806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290101 ] 00:24:34.980 [2024-10-07 09:45:16.650850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.980 [2024-10-07 09:45:16.757617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.981 [2024-10-07 09:45:19.012101] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:34.981 [2024-10-07 09:45:19.012190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.981 [2024-10-07 09:45:19.012213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.981 [2024-10-07 09:45:19.012230] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.981 [2024-10-07 09:45:19.012243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.981 [2024-10-07 09:45:19.012257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.981 [2024-10-07 09:45:19.012270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.981 [2024-10-07 09:45:19.012285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.981 [2024-10-07 09:45:19.012298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.981 [2024-10-07 09:45:19.012311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.981 [2024-10-07 09:45:19.012352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.981 [2024-10-07 09:45:19.012383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1964ab0 (9): Bad file descriptor 00:24:34.981 [2024-10-07 09:45:19.065909] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:34.981 Running I/O for 1 seconds... 
00:24:34.981 8501.00 IOPS, 33.21 MiB/s 00:24:34.981 Latency(us) 00:24:34.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.981 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:34.981 Verification LBA range: start 0x0 length 0x4000 00:24:34.981 NVMe0n1 : 1.01 8554.34 33.42 0.00 0.00 14878.37 2645.71 13592.65 00:24:34.981 =================================================================================================================== 00:24:34.981 Total : 8554.34 33.42 0.00 0.00 14878.37 2645.71 13592.65 00:24:34.981 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.981 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:34.981 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.288 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:35.288 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:35.288 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.575 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 290101 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 290101 ']' 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 290101 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290101 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290101' 00:24:38.983 killing process with pid 290101 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 290101 00:24:38.983 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 290101 00:24:39.242 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:39.242 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:39.501 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:39.501 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:39.501 09:45:28 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:39.501 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:39.501 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:39.501 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.501 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:39.501 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:39.501 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.501 rmmod nvme_tcp 00:24:39.501 rmmod nvme_fabrics 00:24:39.501 rmmod nvme_keyring 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 287317 ']' 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 287317 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 287317 ']' 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 287317 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 287317 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- 
# '[' reactor_1 = sudo ']' 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 287317' 00:24:39.759 killing process with pid 287317 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 287317 00:24:39.759 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 287317 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.019 09:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.928 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.928 00:24:41.928 real 0m35.722s 00:24:41.928 user 2m5.592s 00:24:41.928 sys 0m6.162s 00:24:41.928 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:24:41.928 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:41.928 ************************************ 00:24:41.928 END TEST nvmf_failover 00:24:41.928 ************************************ 00:24:41.928 09:45:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:41.928 09:45:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:41.928 09:45:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:41.928 09:45:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.186 ************************************ 00:24:42.186 START TEST nvmf_host_discovery 00:24:42.186 ************************************ 00:24:42.186 09:45:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:42.186 * Looking for test storage... 
00:24:42.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.186 09:45:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:42.186 09:45:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:24:42.186 09:45:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.186 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:42.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.187 --rc genhtml_branch_coverage=1 00:24:42.187 --rc genhtml_function_coverage=1 00:24:42.187 --rc 
genhtml_legend=1 00:24:42.187 --rc geninfo_all_blocks=1 00:24:42.187 --rc geninfo_unexecuted_blocks=1 00:24:42.187 00:24:42.187 ' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:42.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.187 --rc genhtml_branch_coverage=1 00:24:42.187 --rc genhtml_function_coverage=1 00:24:42.187 --rc genhtml_legend=1 00:24:42.187 --rc geninfo_all_blocks=1 00:24:42.187 --rc geninfo_unexecuted_blocks=1 00:24:42.187 00:24:42.187 ' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:42.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.187 --rc genhtml_branch_coverage=1 00:24:42.187 --rc genhtml_function_coverage=1 00:24:42.187 --rc genhtml_legend=1 00:24:42.187 --rc geninfo_all_blocks=1 00:24:42.187 --rc geninfo_unexecuted_blocks=1 00:24:42.187 00:24:42.187 ' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:42.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.187 --rc genhtml_branch_coverage=1 00:24:42.187 --rc genhtml_function_coverage=1 00:24:42.187 --rc genhtml_legend=1 00:24:42.187 --rc geninfo_all_blocks=1 00:24:42.187 --rc geninfo_unexecuted_blocks=1 00:24:42.187 00:24:42.187 ' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.187 09:45:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.187 09:45:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.187 09:45:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:42.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 
00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.187 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:44.096 
09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.096 09:45:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:44.096 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:44.096 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:44.096 Found net devices under 0000:09:00.0: cvl_0_0 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:44.096 Found net devices under 0000:09:00.1: cvl_0_1 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.096 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:44.097 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:44.097 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.097 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.097 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:44.097 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:44.097 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.097 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:44.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:24:44.358 00:24:44.358 --- 10.0.0.2 ping statistics --- 00:24:44.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.358 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:24:44.358 00:24:44.358 --- 10.0.0.1 ping statistics --- 00:24:44.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.358 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.358 
09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=293258 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 293258 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 293258 ']' 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:44.358 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.358 [2024-10-07 09:45:33.269358] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:24:44.358 [2024-10-07 09:45:33.269437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.358 [2024-10-07 09:45:33.331055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.618 [2024-10-07 09:45:33.441635] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.618 [2024-10-07 09:45:33.441722] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.618 [2024-10-07 09:45:33.441736] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.618 [2024-10-07 09:45:33.441746] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.618 [2024-10-07 09:45:33.441756] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:44.618 [2024-10-07 09:45:33.442344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.618 [2024-10-07 09:45:33.586946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.618 [2024-10-07 09:45:33.595190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:44.618 09:45:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.618 null0 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.618 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.876 null1 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=293380 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 293380 /tmp/host.sock 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 293380 ']' 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:44.876 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:44.876 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.876 [2024-10-07 09:45:33.668874] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:24:44.876 [2024-10-07 09:45:33.668945] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293380 ] 00:24:44.876 [2024-10-07 09:45:33.723387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.876 [2024-10-07 09:45:33.828556] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:45.135 09:45:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:45.135 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:45.135 09:45:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:45.135 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:45.394 09:45:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.394 [2024-10-07 09:45:34.224776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.394 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 
00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:45.395 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:46.332 [2024-10-07 09:45:34.974128] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:46.333 [2024-10-07 09:45:34.974157] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:46.333 [2024-10-07 09:45:34.974180] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:46.333 [2024-10-07 09:45:35.060453] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:46.333 [2024-10-07 09:45:35.238149] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:46.333 [2024-10-07 09:45:35.238172] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # (( max-- )) 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:46.592 09:45:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:46.592 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.593 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:46.593 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:46.593 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:46.852 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:46.852 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:46.852 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:24:46.852 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:46.852 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:46.852 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:46.853 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:47.791 [2024-10-07 09:45:36.692083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:47.791 [2024-10-07 09:45:36.692413] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:47.791 [2024-10-07 09:45:36.692451] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:47.791 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:48.052 [2024-10-07 09:45:36.818829] bdev_nvme.c:7088:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:24:48.052 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:24:48.052 [2024-10-07 09:45:37.042288] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:48.052 [2024-10-07 09:45:37.042310] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:48.052 [2024-10-07 09:45:37.042319] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:48.992 [2024-10-07 09:45:37.911854] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:48.992 [2024-10-07 09:45:37.911885] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:48.992 [2024-10-07 09:45:37.912491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.992 [2024-10-07 09:45:37.912518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.992 [2024-10-07 09:45:37.912534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.992 [2024-10-07 09:45:37.912563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.992 [2024-10-07 09:45:37.912577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.992 [2024-10-07 09:45:37.912591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.992 [2024-10-07 09:45:37.912605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.992 [2024-10-07 09:45:37.912619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.992 [2024-10-07 09:45:37.912632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57ad0 is same with the state(6) to be set
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:48.992 [2024-10-07 09:45:37.922484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b57ad0 (9): Bad file descriptor
00:24:48.992 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:48.992 [2024-10-07 09:45:37.932525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:48.993 [2024-10-07 09:45:37.932754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.993 [2024-10-07 09:45:37.932785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b57ad0 with addr=10.0.0.2, port=4420
00:24:48.993 [2024-10-07 09:45:37.932803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57ad0 is same with the state(6) to be set
00:24:48.993 [2024-10-07 09:45:37.932826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b57ad0 (9): Bad file descriptor
00:24:48.993 [2024-10-07 09:45:37.932848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:48.993 [2024-10-07 09:45:37.932862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:48.993 [2024-10-07 09:45:37.932877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:48.993 [2024-10-07 09:45:37.932898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.993 [2024-10-07 09:45:37.942596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:48.993 [2024-10-07 09:45:37.942784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.993 [2024-10-07 09:45:37.942812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b57ad0 with addr=10.0.0.2, port=4420
00:24:48.993 [2024-10-07 09:45:37.942829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57ad0 is same with the state(6) to be set
00:24:48.993 [2024-10-07 09:45:37.942851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b57ad0 (9): Bad file descriptor
00:24:48.993 [2024-10-07 09:45:37.942872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:48.993 [2024-10-07 09:45:37.942887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:48.993 [2024-10-07 09:45:37.942901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:48.993 [2024-10-07 09:45:37.942921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.993 [2024-10-07 09:45:37.952681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:48.993 [2024-10-07 09:45:37.952867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.993 [2024-10-07 09:45:37.952896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b57ad0 with addr=10.0.0.2, port=4420
00:24:48.993 [2024-10-07 09:45:37.952917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57ad0 is same with the state(6) to be set
00:24:48.993 [2024-10-07 09:45:37.952939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b57ad0 (9): Bad file descriptor
00:24:48.993 [2024-10-07 09:45:37.952976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:48.993 [2024-10-07 09:45:37.952997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:48.993 [2024-10-07 09:45:37.953011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:48.993 [2024-10-07 09:45:37.953031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:48.993 [2024-10-07 09:45:37.962753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:48.993 [2024-10-07 09:45:37.962876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.993 [2024-10-07 09:45:37.962905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b57ad0 with addr=10.0.0.2, port=4420
00:24:48.993 [2024-10-07 09:45:37.962923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57ad0 is same with the state(6) to be set
00:24:48.993 [2024-10-07 09:45:37.962945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b57ad0 (9): Bad file descriptor
00:24:48.993 [2024-10-07 09:45:37.962987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:48.993 [2024-10-07 09:45:37.963005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:48.993 [2024-10-07 09:45:37.963019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:48.993 [2024-10-07 09:45:37.963040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.993 [2024-10-07 09:45:37.972832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:48.993 [2024-10-07 09:45:37.972977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.993 [2024-10-07 09:45:37.973005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b57ad0 with addr=10.0.0.2, port=4420
00:24:48.993 [2024-10-07 09:45:37.973022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57ad0 is same with the state(6) to be set
00:24:48.993 [2024-10-07 09:45:37.973045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b57ad0 (9): Bad file descriptor
00:24:48.993 [2024-10-07 09:45:37.973078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:48.993 [2024-10-07 09:45:37.973095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:48.993 [2024-10-07 09:45:37.973115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:48.993 [2024-10-07 09:45:37.973137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.993 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:48.993 [2024-10-07 09:45:37.982907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:48.993 [2024-10-07 09:45:37.983084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.993 [2024-10-07 09:45:37.983112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b57ad0 with addr=10.0.0.2, port=4420
00:24:48.993 [2024-10-07 09:45:37.983129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57ad0 is same with the state(6) to be set
00:24:48.993 [2024-10-07 09:45:37.983151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b57ad0 (9): Bad file descriptor
00:24:48.993 [2024-10-07 09:45:37.983185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:48.994 [2024-10-07 09:45:37.983204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:48.994 [2024-10-07 09:45:37.983219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:48.994 [2024-10-07 09:45:37.983239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.252 [2024-10-07 09:45:37.992995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:49.252 [2024-10-07 09:45:37.993218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.252 [2024-10-07 09:45:37.993246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b57ad0 with addr=10.0.0.2, port=4420
00:24:49.252 [2024-10-07 09:45:37.993262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b57ad0 is same with the state(6) to be set
00:24:49.252 [2024-10-07 09:45:37.993284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b57ad0 (9): Bad file descriptor
00:24:49.252 [2024-10-07 09:45:37.993318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:49.252 [2024-10-07 09:45:37.993336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:49.252 [2024-10-07 09:45:37.993366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:49.253 [2024-10-07 09:45:37.993393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:49.253 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:49.253 [2024-10-07 09:45:37.998323] bdev_nvme.c:6951:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:24:49.253 [2024-10-07 09:45:37.998350] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]]
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]]
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:49.253
09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:49.253 09:45:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.253 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.635 [2024-10-07 09:45:39.214634] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:50.635 [2024-10-07 09:45:39.214656] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:50.635 [2024-10-07 09:45:39.214695] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:50.635 [2024-10-07 09:45:39.300975] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:50.636 [2024-10-07 09:45:39.362440] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:50.636 [2024-10-07 09:45:39.362471] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:24:50.636 request: 00:24:50.636 { 00:24:50.636 "name": "nvme", 00:24:50.636 "trtype": "tcp", 00:24:50.636 "traddr": "10.0.0.2", 00:24:50.636 "adrfam": "ipv4", 00:24:50.636 "trsvcid": "8009", 00:24:50.636 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:50.636 "wait_for_attach": true, 00:24:50.636 "method": "bdev_nvme_start_discovery", 00:24:50.636 "req_id": 1 00:24:50.636 } 00:24:50.636 Got JSON-RPC error response 00:24:50.636 response: 00:24:50.636 { 00:24:50.636 "code": -17, 00:24:50.636 "message": "File exists" 00:24:50.636 } 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.636 09:45:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.636 request: 00:24:50.636 { 00:24:50.636 "name": "nvme_second", 00:24:50.636 "trtype": "tcp", 00:24:50.636 "traddr": "10.0.0.2", 00:24:50.636 "adrfam": "ipv4", 00:24:50.636 "trsvcid": "8009", 00:24:50.636 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:50.636 "wait_for_attach": true, 00:24:50.636 "method": "bdev_nvme_start_discovery", 00:24:50.636 "req_id": 1 00:24:50.636 } 00:24:50.636 Got JSON-RPC error response 00:24:50.636 response: 00:24:50.636 { 00:24:50.636 "code": -17, 00:24:50.636 "message": "File exists" 00:24:50.636 } 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:50.636 
09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:50.636 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:50.637 09:45:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.637 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.575 [2024-10-07 09:45:40.569972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.575 [2024-10-07 09:45:40.570044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbef90 with addr=10.0.0.2, port=8010 00:24:51.576 [2024-10-07 09:45:40.570078] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:51.576 [2024-10-07 09:45:40.570094] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:51.576 [2024-10-07 09:45:40.570108] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:52.957 [2024-10-07 09:45:41.572329] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.957 [2024-10-07 09:45:41.572398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbef90 with addr=10.0.0.2, port=8010 00:24:52.957 [2024-10-07 09:45:41.572427] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:52.957 [2024-10-07 09:45:41.572441] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:52.957 [2024-10-07 09:45:41.572483] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:53.897 [2024-10-07 09:45:42.574519] bdev_nvme.c:7207:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:53.897 request: 00:24:53.897 { 00:24:53.897 "name": "nvme_second", 00:24:53.897 "trtype": "tcp", 00:24:53.897 "traddr": "10.0.0.2", 00:24:53.897 "adrfam": "ipv4", 00:24:53.897 "trsvcid": "8010", 00:24:53.898 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:53.898 "wait_for_attach": false, 00:24:53.898 "attach_timeout_ms": 3000, 00:24:53.898 "method": "bdev_nvme_start_discovery", 00:24:53.898 "req_id": 1 00:24:53.898 } 00:24:53.898 Got JSON-RPC error response 00:24:53.898 response: 00:24:53.898 { 00:24:53.898 "code": -110, 00:24:53.898 "message": "Connection timed out" 00:24:53.898 } 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 293380 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:53.898 rmmod nvme_tcp 00:24:53.898 rmmod nvme_fabrics 00:24:53.898 rmmod nvme_keyring 00:24:53.898 09:45:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 293258 ']' 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 293258 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 293258 ']' 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 293258 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 293258 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 293258' 00:24:53.898 killing process with pid 293258 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 293258 00:24:53.898 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 293258 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:54.157 09:45:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.157 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:56.694 00:24:56.694 real 0m14.138s 00:24:56.694 user 0m20.891s 00:24:56.694 sys 0m2.817s 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.694 ************************************ 00:24:56.694 END TEST nvmf_host_discovery 00:24:56.694 ************************************ 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:56.694 
09:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.694 ************************************ 00:24:56.694 START TEST nvmf_host_multipath_status 00:24:56.694 ************************************ 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:56.694 * Looking for test storage... 00:24:56.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.694 09:45:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:56.694 
09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:56.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.694 --rc genhtml_branch_coverage=1 00:24:56.694 --rc genhtml_function_coverage=1 00:24:56.694 --rc genhtml_legend=1 00:24:56.694 --rc geninfo_all_blocks=1 00:24:56.694 --rc geninfo_unexecuted_blocks=1 00:24:56.694 00:24:56.694 ' 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:56.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.694 --rc genhtml_branch_coverage=1 00:24:56.694 --rc genhtml_function_coverage=1 00:24:56.694 --rc genhtml_legend=1 00:24:56.694 --rc geninfo_all_blocks=1 00:24:56.694 --rc geninfo_unexecuted_blocks=1 00:24:56.694 00:24:56.694 ' 00:24:56.694 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:56.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.695 --rc genhtml_branch_coverage=1 00:24:56.695 --rc genhtml_function_coverage=1 00:24:56.695 --rc genhtml_legend=1 00:24:56.695 --rc geninfo_all_blocks=1 00:24:56.695 --rc geninfo_unexecuted_blocks=1 00:24:56.695 00:24:56.695 ' 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:56.695 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:56.695 --rc genhtml_branch_coverage=1 00:24:56.695 --rc genhtml_function_coverage=1 00:24:56.695 --rc genhtml_legend=1 00:24:56.695 --rc geninfo_all_blocks=1 00:24:56.695 --rc geninfo_unexecuted_blocks=1 00:24:56.695 00:24:56.695 ' 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:56.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:56.695 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:56.696 09:45:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:56.696 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:58.604 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:58.604 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:58.604 Found net devices under 0000:09:00.0: cvl_0_0 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.604 09:45:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:58.604 Found net devices under 0000:09:00.1: cvl_0_1 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.604 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.604 09:45:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:24:58.605 00:24:58.605 --- 10.0.0.2 ping statistics --- 00:24:58.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.605 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:24:58.605 00:24:58.605 --- 10.0.0.1 ping statistics --- 00:24:58.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.605 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=296403 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 296403 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 296403 ']' 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.605 [2024-10-07 09:45:47.321262] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:24:58.605 [2024-10-07 09:45:47.321339] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.605 [2024-10-07 09:45:47.381244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:58.605 [2024-10-07 09:45:47.486113] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.605 [2024-10-07 09:45:47.486170] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:58.605 [2024-10-07 09:45:47.486208] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.605 [2024-10-07 09:45:47.486219] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.605 [2024-10-07 09:45:47.486229] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.605 [2024-10-07 09:45:47.486935] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.605 [2024-10-07 09:45:47.486941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.605 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.863 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.863 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=296403 00:24:58.863 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:58.863 [2024-10-07 09:45:47.853355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.121 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:59.380 Malloc0 00:24:59.380 09:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:59.637 09:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.895 09:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.153 [2024-10-07 09:45:48.959867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.153 09:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:00.411 [2024-10-07 09:45:49.244752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=296676 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 296676 /var/tmp/bdevperf.sock 00:25:00.411 09:45:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 296676 ']' 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.411 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.669 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:00.669 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:00.669 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:00.927 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:01.493 Nvme0n1 00:25:01.493 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-x multipath -l -1 -o 10 00:25:02.059 Nvme0n1 00:25:02.059 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:02.059 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:03.959 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:03.959 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:04.218 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:04.477 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:05.851 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:05.851 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.851 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.851 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.851 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:25:05.851 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.851 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.851 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:06.109 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.109 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:06.109 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.109 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:06.367 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.367 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:06.367 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.367 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.626 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:06.626 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.626 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.626 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.883 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.883 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:06.883 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.883 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:07.142 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.142 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:07.142 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:07.401 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:07.659 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:09.032 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:09.032 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:09.032 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.032 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:09.032 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.032 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:09.032 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.032 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:09.290 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.290 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:09.290 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:09.290 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:09.552 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.552 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:09.552 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.552 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.811 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.811 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:09.811 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.811 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:10.069 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.069 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:10.069 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.069 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:10.327 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.327 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:10.327 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:10.894 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:10.894 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:12.268 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:12.268 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:12.269 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.269 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:12.269 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
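Every `check_status` round above reduces to the same primitive: fetch `bdev_nvme_get_io_paths` over the bdevperf RPC socket, pull one boolean (`current`, `connected`, or `accessible`) for one portal's `trsvcid` with jq, and compare it to the expected value. The sketch below reproduces that check against a canned JSON stand-in (the field names are taken from the jq filters in the log; the rest of the real RPC output is omitted), and uses python3 instead of jq purely so the sketch is self-contained without a live bdevperf socket.

```shell
# Canned stand-in for `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths`.
# Only the fields the log's jq filters touch are modeled; values are hypothetical.
get_io_paths() {
  cat <<'EOF'
{"poll_groups": [{"io_paths": [
  {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
  {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": true}
]}]}
EOF
}

# port_status <port> <field> <expected>: the same comparison the log performs
# with `jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="...").<field>'`,
# done here with python3 over the canned JSON.
port_status() {
  local port=$1 field=$2 expected=$3
  local actual
  actual=$(get_io_paths | python3 -c '
import json, sys
port, field = sys.argv[1], sys.argv[2]
for group in json.load(sys.stdin)["poll_groups"]:
    for path in group["io_paths"]:
        if path["transport"]["trsvcid"] == port:
            # JSON true/false -> the lowercase strings jq -r would print
            print(str(path[field]).lower())
' "$port" "$field")
  [[ $actual == "$expected" ]]
}

port_status 4420 current true && port_status 4421 current false && echo OK
```

The return status of `port_status` is what `check_status` aggregates: six calls, one per (port, field) pair, all of which must match for the round to pass.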
00:25:12.269 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:12.269 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.269 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:12.527 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:12.527 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:12.527 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.527 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:12.784 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.784 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:12.784 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.784 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.042 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:25:13.042 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:13.042 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.042 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:13.300 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.300 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:13.300 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.300 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:13.558 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.558 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:13.558 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:14.124 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:14.124 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:15.499 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:15.499 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:15.499 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.499 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:15.499 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.499 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:15.499 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.499 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.757 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.757 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.757 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
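The state transitions driving each round come from `set_ANA_state`, which issues one `nvmf_subsystem_listener_set_ana_state` RPC per listener: port 4420 gets the first argument, port 4421 the second. The sketch below mirrors that call pattern from the log; the `rpc` function is a stub that echoes its arguments instead of invoking `scripts/rpc.py`, so it runs without an nvmf target.

```shell
# Stub standing in for scripts/rpc.py: records the call instead of issuing it.
rpc() { echo "rpc.py $*"; }

# set_ANA_state <state-for-4420> <state-for-4421> -- mirrors the helper the log
# exercises: the two listeners of cnode1 are set independently, which is what
# lets the test build asymmetric combinations like non_optimized/inaccessible.
set_ANA_state() {
  rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n "$1"
  rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized inaccessible
```

Note the `sleep 1` that follows each `set_ANA_state` in the log: the host side learns of the change asynchronously (via ANA log page updates), so the test waits before re-reading path status.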
00:25:15.757 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.015 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.015 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.015 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.015 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.273 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.273 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.273 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.273 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.531 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.531 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:16.531 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:16.531 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:16.789 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.789 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:16.789 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:17.355 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:17.355 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:18.727 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:18.727 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:18.727 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.727 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.727 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.727 09:46:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:18.727 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.727 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.985 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.985 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.985 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.985 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.242 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.242 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.242 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.242 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:19.499 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.499 
09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:19.499 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.499 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.756 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.756 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:19.756 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.756 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:20.014 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.014 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:20.014 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:20.273 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n optimized 00:25:20.531 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:21.904 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:21.904 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:21.904 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.904 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.904 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.904 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:21.904 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.904 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.162 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.162 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.162 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.162 09:46:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.420 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.420 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.420 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.420 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.677 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.677 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:22.677 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.677 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.934 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.934 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:22.934 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.934 
09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.192 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.192 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:23.449 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:23.449 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:24.015 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:24.273 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:25.208 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:25.208 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:25.208 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.208 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:25:25.466 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.466 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:25.466 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.466 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.724 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.724 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.724 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.724 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.983 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.983 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.983 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.983 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").connected' 00:25:26.240 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.240 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.240 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.240 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:26.498 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.498 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:26.498 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.498 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.756 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.756 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:26.756 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:27.015 09:46:15 
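The repeated `port_status` checks above fetch the path list with `bdev_nvme_get_io_paths` over `rpc.py` and pluck one field out of the reply with `jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="...").<attr>'`. A minimal Python sketch of that same selection logic, run against a hand-built reply shaped like the jq filter implies (the sample values and the helper name `port_status` are assumptions, not SPDK API):

```python
# Emulates: jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
def port_status(reply, trsvcid, attr):
    """Return the requested attribute of the io_path listening on trsvcid."""
    for group in reply["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[attr]
    return None  # no path on that port

# Hypothetical reply mirroring the two listeners (ports 4420/4421) seen in this log.
reply = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"}, "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"}, "current": True, "connected": True, "accessible": True},
        ]}
    ]
}

assert port_status(reply, "4421", "accessible") is True
```

The shell test then compares the extracted string against the expected literal (`[[ true == \t\r\u\e ]]`), which is what the repeated bracket tests in the log are doing.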
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:27.274 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:28.650 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:28.650 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:28.650 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.650 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.650 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.650 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:28.650 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.650 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.908 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.908 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.908 
09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.908 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.166 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.166 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.166 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.166 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:29.423 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.423 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:29.423 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.423 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.680 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.680 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:25:29.680 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.680 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:29.936 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.936 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:29.936 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.194 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:30.451 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:31.820 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:31.820 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.820 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.820 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:25:31.820 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.820 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:31.820 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.820 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.076 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.076 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.076 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.076 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.333 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.333 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.333 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.333 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:25:32.590 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.590 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.590 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.590 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.849 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.849 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:32.849 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.849 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.107 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:33.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.366 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:33.624 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:34.998 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:34.998 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:34.998 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.998 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.998 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.998 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:34.998 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.998 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.256 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.256 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.256 09:46:24 
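The six booleans passed to `check_status` track the ANA states just applied: with the `active_active` multipath policy, a path appears to stay `current` only while its ANA state matches the best state among accessible listeners, and `inaccessible` clears `accessible`. A sketch of that expectation, derived purely from the transitions visible in this log (the ranking, the tuple order, and the function name are my reading of the test, not an SPDK interface):

```python
# ANA states ranked best-first, as suggested by the log's check_status outcomes.
RANK = {"optimized": 0, "non_optimized": 1, "inaccessible": 2}

def expected_check_status(state_4420, state_4421):
    """Return the six booleans check_status asserts, in log order:
    (current@4420, current@4421, connected@4420, connected@4421,
     accessible@4420, accessible@4421). Assumes at least one
    listener is accessible, as holds everywhere in this log."""
    states = (state_4420, state_4421)
    accessible = [s != "inaccessible" for s in states]
    best = min(RANK[s] for s in states if s != "inaccessible")
    current = [RANK[s] == best for s in states]
    connected = [True, True]  # the TCP connections stay up throughout
    return (*current, *connected, *accessible)

# Matches "check_status true false true true true false" after
# set_ANA_state non_optimized inaccessible:
assert expected_check_status("non_optimized", "inaccessible") == \
    (True, False, True, True, True, False)
```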
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.256 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.514 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.514 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.514 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.514 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.772 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.772 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.772 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.772 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.030 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.030 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:36.030 
09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.030 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.288 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.288 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 296676 00:25:36.288 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 296676 ']' 00:25:36.288 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 296676 00:25:36.288 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:36.547 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:36.547 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 296676 00:25:36.547 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:36.547 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:36.547 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 296676' 00:25:36.547 killing process with pid 296676 00:25:36.547 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 296676 00:25:36.547 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 296676 00:25:36.547 { 00:25:36.547 "results": [ 
00:25:36.547 { 00:25:36.547 "job": "Nvme0n1", 00:25:36.547 "core_mask": "0x4", 00:25:36.547 "workload": "verify", 00:25:36.547 "status": "terminated", 00:25:36.547 "verify_range": { 00:25:36.547 "start": 0, 00:25:36.547 "length": 16384 00:25:36.547 }, 00:25:36.547 "queue_depth": 128, 00:25:36.547 "io_size": 4096, 00:25:36.547 "runtime": 34.239235, 00:25:36.547 "iops": 8018.637098638448, 00:25:36.547 "mibps": 31.322801166556438, 00:25:36.547 "io_failed": 0, 00:25:36.547 "io_timeout": 0, 00:25:36.547 "avg_latency_us": 15936.466879873258, 00:25:36.547 "min_latency_us": 235.14074074074074, 00:25:36.547 "max_latency_us": 4026531.84 00:25:36.547 } 00:25:36.547 ], 00:25:36.547 "core_count": 1 00:25:36.547 } 00:25:36.809 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 296676 00:25:36.809 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:36.809 [2024-10-07 09:45:49.302158] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:25:36.809 [2024-10-07 09:45:49.302249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid296676 ] 00:25:36.809 [2024-10-07 09:45:49.359092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.809 [2024-10-07 09:45:49.467708] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.809 [2024-10-07 09:45:50.829064] bdev_nvme.c:5607:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:25:36.809 Running I/O for 90 seconds... 
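The terminated bdevperf job summary above is internally consistent: `mibps` is just `iops * io_size` converted to MiB/s, and the ~34 s `runtime` (versus the requested 90 s of I/O) reflects the process being killed mid-run. A quick check of that arithmetic, with the values copied from the results JSON:

```python
iops = 8018.637098638448   # "iops" from the results JSON
io_size = 4096             # bytes per I/O, "io_size"
runtime = 34.239235        # seconds, "runtime"

mibps = iops * io_size / (1024 * 1024)   # bytes/s -> MiB/s
total_ios = iops * runtime               # rough count of verify I/Os completed

assert abs(mibps - 31.322801166556438) < 1e-9  # matches "mibps" in the JSON
```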
00:25:36.809 8596.00 IOPS, 33.58 MiB/s 8554.00 IOPS, 33.41 MiB/s 8542.67 IOPS, 33.37 MiB/s 8566.75 IOPS, 33.46 MiB/s 8537.40 IOPS, 33.35 MiB/s 8517.17 IOPS, 33.27 MiB/s 8484.57 IOPS, 33.14 MiB/s 8475.12 IOPS, 33.11 MiB/s 8463.78 IOPS, 33.06 MiB/s 8479.70 IOPS, 33.12 MiB/s 8491.64 IOPS, 33.17 MiB/s 8491.08 IOPS, 33.17 MiB/s 8503.38 IOPS, 33.22 MiB/s 8510.29 IOPS, 33.24 MiB/s [2024-10-07 09:46:06.034537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.034627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.034724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.034747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.034797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.034815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.034838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.034855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.034877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:36.809 [2024-10-07 09:46:06.034893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.034916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.034932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.034955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.034972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.035011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.035028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.035050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.035095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.035120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.035137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:25:36.809 [2024-10-07 09:46:06.035174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.035191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.035214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.035230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.035252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.809 [2024-10-07 09:46:06.035268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:36.809 [2024-10-07 09:46:06.035290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.810 [2024-10-07 09:46:06.035306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.810 [2024-10-07 09:46:06.035343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:36.810 [2024-10-07 09:46:06.035379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.810 [2024-10-07 09:46:06.035416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.810 [2024-10-07 09:46:06.035455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 [2024-10-07 09:46:06.035494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 [2024-10-07 09:46:06.035532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 [2024-10-07 09:46:06.035570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:25:36.810 [2024-10-07 09:46:06.035596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 [2024-10-07 09:46:06.035613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 [2024-10-07 09:46:06.035649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 [2024-10-07 09:46:06.035711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.035733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 [2024-10-07 09:46:06.035766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.036127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 [2024-10-07 09:46:06.036151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:36.810 [2024-10-07 09:46:06.036180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.810 
00:25:36.810 [2024-10-07 09:46:06.036198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:36.810 [log condensed: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted; WRITE commands sqid:1 nsid:1 lba:103696-104488 len:8 (plus a READ at lba:103616) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1]
00:25:36.813 8508.13 IOPS, 33.23 MiB/s
00:25:36.813 7976.38 IOPS, 31.16 MiB/s
00:25:36.813 7507.18 IOPS, 29.32 MiB/s
00:25:36.813 7090.11 IOPS, 27.70 MiB/s
00:25:36.813 6724.79 IOPS, 26.27 MiB/s
00:25:36.813 6800.25 IOPS, 26.56 MiB/s
00:25:36.813 6877.57 IOPS, 26.87 MiB/s
00:25:36.813 6982.05 IOPS, 27.27 MiB/s
00:25:36.813 7175.91 IOPS, 28.03 MiB/s
00:25:36.813 7364.88 IOPS, 28.77 MiB/s
00:25:36.813 7505.84 IOPS, 29.32 MiB/s
00:25:36.813 7536.08 IOPS, 29.44 MiB/s
00:25:36.813 7566.07 IOPS, 29.55 MiB/s
00:25:36.813 7596.64 IOPS, 29.67 MiB/s
00:25:36.813 7677.41 IOPS, 29.99 MiB/s
00:25:36.813 7812.50 IOPS, 30.52 MiB/s
00:25:36.813 7944.32 IOPS, 31.03 MiB/s
00:25:36.813 [2024-10-07 09:46:22.589453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:36.813 [2024-10-07 09:46:22.589523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:36.813 [log condensed: subsequent WRITE commands sqid:1 nsid:1 lba:47072-47248 len:8 likewise completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1]
[2024-10-07 09:46:22.590619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.813 [2024-10-07 09:46:22.590649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.590691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.813 [2024-10-07 09:46:22.590713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.590737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.590754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.590776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.590793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.590815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.590831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.590853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 
09:46:22.590869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.590892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.590908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.590930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.590946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.590969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.590985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591098] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.813 [2024-10-07 09:46:22.591334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.813 [2024-10-07 09:46:22.591373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.813 [2024-10-07 09:46:22.591412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.813 [2024-10-07 09:46:22.591489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:36.813 [2024-10-07 09:46:22.591512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.591919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.814 [2024-10-07 09:46:22.591957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.591993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.814 [2024-10-07 09:46:22.592010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.592032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.814 [2024-10-07 09:46:22.592048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.592069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.814 [2024-10-07 09:46:22.592100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.592123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.814 [2024-10-07 09:46:22.592139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.592161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.592177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.592204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.592221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.814 [2024-10-07 09:46:22.594284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.814 [2024-10-07 09:46:22.594332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:36.814 [2024-10-07 09:46:22.594910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.814 [2024-10-07 09:46:22.594927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:36.814 7999.00 IOPS, 31.25 MiB/s 8011.39 IOPS, 31.29 MiB/s 8021.74 IOPS, 31.33 MiB/s Received shutdown signal, test time was about 34.240037 seconds 00:25:36.814 00:25:36.814 Latency(us) 00:25:36.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.814 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:36.814 Verification LBA range: start 0x0 length 0x4000 
00:25:36.814 Nvme0n1 : 34.24 8018.64 31.32 0.00 0.00 15936.47 235.14 4026531.84 00:25:36.814 =================================================================================================================== 00:25:36.814 Total : 8018.64 31.32 0.00 0.00 15936.47 235.14 4026531.84 00:25:36.814 [2024-10-07 09:46:25.335774] app.c:1033:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:25:36.814 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:37.073 rmmod nvme_tcp 00:25:37.073 rmmod nvme_fabrics 00:25:37.073 rmmod nvme_keyring 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 296403 ']' 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 296403 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 296403 ']' 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 296403 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.073 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 296403 00:25:37.073 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:37.073 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:37.073 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 296403' 00:25:37.073 killing process with pid 296403 00:25:37.073 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 296403 00:25:37.073 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 296403 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:37.331 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.332 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.332 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:39.870 00:25:39.870 real 0m43.220s 00:25:39.870 user 2m10.966s 00:25:39.870 sys 0m10.943s 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.870 ************************************ 00:25:39.870 END TEST nvmf_host_multipath_status 00:25:39.870 ************************************ 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh 
--transport=tcp 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.870 ************************************ 00:25:39.870 START TEST nvmf_discovery_remove_ifc 00:25:39.870 ************************************ 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:39.870 * Looking for test storage... 00:25:39.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.870 09:46:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:39.870 09:46:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:39.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.870 --rc genhtml_branch_coverage=1 00:25:39.870 --rc genhtml_function_coverage=1 00:25:39.870 --rc genhtml_legend=1 00:25:39.870 --rc geninfo_all_blocks=1 00:25:39.870 --rc geninfo_unexecuted_blocks=1 00:25:39.870 00:25:39.870 ' 00:25:39.870 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:39.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.870 --rc genhtml_branch_coverage=1 00:25:39.870 --rc genhtml_function_coverage=1 00:25:39.870 --rc genhtml_legend=1 00:25:39.870 --rc geninfo_all_blocks=1 00:25:39.870 --rc geninfo_unexecuted_blocks=1 00:25:39.870 00:25:39.871 ' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:39.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.871 --rc genhtml_branch_coverage=1 00:25:39.871 --rc genhtml_function_coverage=1 00:25:39.871 --rc genhtml_legend=1 00:25:39.871 --rc geninfo_all_blocks=1 00:25:39.871 --rc geninfo_unexecuted_blocks=1 00:25:39.871 00:25:39.871 ' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:39.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.871 --rc genhtml_branch_coverage=1 00:25:39.871 --rc genhtml_function_coverage=1 00:25:39.871 --rc genhtml_legend=1 00:25:39.871 --rc geninfo_all_blocks=1 00:25:39.871 --rc geninfo_unexecuted_blocks=1 00:25:39.871 00:25:39.871 ' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:25:39.871 09:46:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.871 
09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:39.871 
09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.871 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.775 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.775 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.775 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.775 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.775 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.775 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:25:41.776 Found 0000:09:00.0 (0x8086 - 0x1592) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:25:41.776 Found 0000:09:00.1 (0x8086 - 0x1592) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:41.776 Found net devices under 0000:09:00.0: cvl_0_0 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:41.776 09:46:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:41.776 Found net devices under 0000:09:00.1: cvl_0_1 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:41.776 09:46:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.776 09:46:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:41.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:25:41.776 00:25:41.776 --- 10.0.0.2 ping statistics --- 00:25:41.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.776 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:41.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:25:41.776 00:25:41.776 --- 10.0.0.1 ping statistics --- 00:25:41.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.776 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:41.776 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=302841 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 302841 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 302841 ']' 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:41.777 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.035 [2024-10-07 09:46:30.805791] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:25:42.035 [2024-10-07 09:46:30.805864] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.035 [2024-10-07 09:46:30.865944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.035 [2024-10-07 09:46:30.967204] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.035 [2024-10-07 09:46:30.967267] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:42.035 [2024-10-07 09:46:30.967288] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.035 [2024-10-07 09:46:30.967306] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.035 [2024-10-07 09:46:30.967316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.035 [2024-10-07 09:46:30.967881] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.294 [2024-10-07 09:46:31.118051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.294 [2024-10-07 09:46:31.126273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:42.294 null0 00:25:42.294 [2024-10-07 09:46:31.158155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=302861 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 302861 /tmp/host.sock 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 302861 ']' 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:42.294 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:42.294 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.294 [2024-10-07 09:46:31.224277] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:25:42.294 [2024-10-07 09:46:31.224345] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302861 ] 00:25:42.294 [2024-10-07 09:46:31.279405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.553 [2024-10-07 09:46:31.386500] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.553 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.812 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.812 09:46:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:42.812 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.812 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.746 [2024-10-07 09:46:32.615615] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.746 [2024-10-07 09:46:32.615652] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.746 [2024-10-07 09:46:32.615713] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.004 [2024-10-07 09:46:32.744163] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:44.004 [2024-10-07 09:46:32.848521] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:44.004 [2024-10-07 09:46:32.848580] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:44.004 [2024-10-07 09:46:32.848616] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:44.004 [2024-10-07 09:46:32.848636] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.004 [2024-10-07 09:46:32.848692] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.004 [2024-10-07 09:46:32.854113] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfe1f90 was disconnected and freed. delete nvme_qpair. 
00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:44.004 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:45.380 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:45.380 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.380 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:45.380 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.380 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.380 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:45.380 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:45.380 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.380 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:45.380 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:46.314 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:47.249 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:48.182 09:46:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:48.182 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:49.561 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:25:49.561 [2024-10-07 09:46:38.290260] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:49.561 [2024-10-07 09:46:38.290338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.562 [2024-10-07 09:46:38.290358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.562 [2024-10-07 09:46:38.290375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.562 [2024-10-07 09:46:38.290388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.562 [2024-10-07 09:46:38.290402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.562 [2024-10-07 09:46:38.290415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.562 [2024-10-07 09:46:38.290428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.562 [2024-10-07 09:46:38.290440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.562 [2024-10-07 09:46:38.290462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.562 [2024-10-07 09:46:38.290476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.562 [2024-10-07 09:46:38.290487] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe9c0 is same with the state(6) to be set 00:25:49.562 [2024-10-07 09:46:38.300278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbe9c0 (9): Bad file descriptor 00:25:49.562 [2024-10-07 09:46:38.310321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.499 [2024-10-07 09:46:39.331693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:50.499 [2024-10-07 09:46:39.331747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbe9c0 with addr=10.0.0.2, port=4420 00:25:50.499 [2024-10-07 09:46:39.331766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe9c0 is same with the state(6) to be set 00:25:50.499 [2024-10-07 09:46:39.331795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbe9c0 (9): Bad file descriptor 00:25:50.499 [2024-10-07 09:46:39.332192] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to 
perform failover, already in progress. 00:25:50.499 [2024-10-07 09:46:39.332228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:50.499 [2024-10-07 09:46:39.332244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:50.499 [2024-10-07 09:46:39.332258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:50.499 [2024-10-07 09:46:39.332279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.499 [2024-10-07 09:46:39.332294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:50.499 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:51.434 [2024-10-07 09:46:40.334793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:51.434 [2024-10-07 09:46:40.334860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:51.434 [2024-10-07 09:46:40.334875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:51.434 [2024-10-07 09:46:40.334888] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:51.434 [2024-10-07 09:46:40.334916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.434 [2024-10-07 09:46:40.334976] bdev_nvme.c:6915:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:51.434 [2024-10-07 09:46:40.335058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.434 [2024-10-07 09:46:40.335094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-10-07 09:46:40.335111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.434 [2024-10-07 09:46:40.335125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-10-07 09:46:40.335138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.434 [2024-10-07 09:46:40.335151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-10-07 09:46:40.335164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.434 [2024-10-07 09:46:40.335177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-10-07 09:46:40.335191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.434 [2024-10-07 09:46:40.335204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-10-07 09:46:40.335216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:51.434 [2024-10-07 09:46:40.335301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfae100 (9): Bad file descriptor 00:25:51.434 [2024-10-07 09:46:40.336329] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:51.434 [2024-10-07 09:46:40.336349] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.434 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.692 09:46:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:51.692 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.628 09:46:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:52.628 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.562 [2024-10-07 09:46:42.391293] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:53.562 [2024-10-07 09:46:42.391328] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:53.562 [2024-10-07 09:46:42.391349] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:53.562 [2024-10-07 09:46:42.520771] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:53.562 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.562 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.562 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.562 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.562 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.562 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.562 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.562 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.820 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:53.820 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.820 [2024-10-07 09:46:42.705852] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:53.820 [2024-10-07 09:46:42.705898] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:53.820 [2024-10-07 09:46:42.705927] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:53.820 [2024-10-07 09:46:42.705947] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:53.820 [2024-10-07 09:46:42.705958] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:53.820 [2024-10-07 09:46:42.709894] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfc8df0 was disconnected and freed. delete nvme_qpair. 
00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 302861 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 302861 ']' 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 302861 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 302861 00:25:54.754 
09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 302861' 00:25:54.754 killing process with pid 302861 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 302861 00:25:54.754 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 302861 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:55.012 rmmod nvme_tcp 00:25:55.012 rmmod nvme_fabrics 00:25:55.012 rmmod nvme_keyring 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 302841 ']' 00:25:55.012 09:46:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 302841 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 302841 ']' 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 302841 00:25:55.012 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:55.013 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.013 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 302841 00:25:55.013 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:55.013 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:55.013 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 302841' 00:25:55.013 killing process with pid 302841 00:25:55.013 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 302841 00:25:55.013 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 302841 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.272 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.809 00:25:57.809 real 0m17.890s 00:25:57.809 user 0m25.977s 00:25:57.809 sys 0m3.021s 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.809 ************************************ 00:25:57.809 END TEST nvmf_discovery_remove_ifc 00:25:57.809 ************************************ 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.809 ************************************ 00:25:57.809 START TEST nvmf_identify_kernel_target 
00:25:57.809 ************************************ 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:57.809 * Looking for test storage... 00:25:57.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@341 -- # ver2_l=1 00:25:57.809 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.810 
09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.810 --rc genhtml_branch_coverage=1 00:25:57.810 --rc genhtml_function_coverage=1 00:25:57.810 --rc genhtml_legend=1 00:25:57.810 --rc geninfo_all_blocks=1 00:25:57.810 --rc geninfo_unexecuted_blocks=1 00:25:57.810 00:25:57.810 ' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.810 --rc genhtml_branch_coverage=1 00:25:57.810 --rc genhtml_function_coverage=1 00:25:57.810 --rc genhtml_legend=1 00:25:57.810 --rc geninfo_all_blocks=1 00:25:57.810 --rc geninfo_unexecuted_blocks=1 00:25:57.810 00:25:57.810 ' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.810 --rc genhtml_branch_coverage=1 00:25:57.810 --rc genhtml_function_coverage=1 00:25:57.810 --rc genhtml_legend=1 00:25:57.810 --rc geninfo_all_blocks=1 00:25:57.810 --rc geninfo_unexecuted_blocks=1 00:25:57.810 00:25:57.810 ' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.810 --rc genhtml_branch_coverage=1 00:25:57.810 --rc genhtml_function_coverage=1 00:25:57.810 --rc genhtml_legend=1 00:25:57.810 --rc geninfo_all_blocks=1 00:25:57.810 --rc geninfo_unexecuted_blocks=1 00:25:57.810 
00:25:57.810 ' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.810 09:46:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.810 09:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.714 09:46:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:25:59.714 Found 0000:09:00.0 (0x8086 - 0x1592) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.714 09:46:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:25:59.714 Found 0000:09:00.1 (0x8086 - 0x1592) 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:59.714 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.715 09:46:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:59.715 Found net devices under 0000:09:00.0: cvl_0_0 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:59.715 Found net devices under 0000:09:00.1: cvl_0_1 
00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:59.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:59.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:25:59.715 00:25:59.715 --- 10.0.0.2 ping statistics --- 00:25:59.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.715 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:59.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:59.715 00:25:59.715 --- 10.0.0.1 ping statistics --- 00:25:59.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.715 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:59.715 
09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:25:59.715 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:25:59.973 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:59.973 09:46:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:00.911 Waiting for block devices as requested 00:26:00.911 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:26:01.170 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:01.170 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:01.430 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:01.430 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:01.430 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:01.689 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:01.689 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:01.689 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:01.689 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:01.947 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:01.947 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:01.947 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:01.947 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:02.206 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:26:02.206 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:02.206 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:02.464 No valid GPT data, bailing 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:02.464 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.1 -t tcp -s 4420 00:26:02.465 00:26:02.465 Discovery Log Number of Records 2, Generation counter 2 00:26:02.465 =====Discovery Log Entry 0====== 00:26:02.465 trtype: tcp 00:26:02.465 adrfam: ipv4 00:26:02.465 subtype: current discovery subsystem 
00:26:02.465 treq: not specified, sq flow control disable supported 00:26:02.465 portid: 1 00:26:02.465 trsvcid: 4420 00:26:02.465 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:02.465 traddr: 10.0.0.1 00:26:02.465 eflags: none 00:26:02.465 sectype: none 00:26:02.465 =====Discovery Log Entry 1====== 00:26:02.465 trtype: tcp 00:26:02.465 adrfam: ipv4 00:26:02.465 subtype: nvme subsystem 00:26:02.465 treq: not specified, sq flow control disable supported 00:26:02.465 portid: 1 00:26:02.465 trsvcid: 4420 00:26:02.465 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:02.465 traddr: 10.0.0.1 00:26:02.465 eflags: none 00:26:02.465 sectype: none 00:26:02.465 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:02.465 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:02.725 ===================================================== 00:26:02.725 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:02.725 ===================================================== 00:26:02.725 Controller Capabilities/Features 00:26:02.725 ================================ 00:26:02.725 Vendor ID: 0000 00:26:02.725 Subsystem Vendor ID: 0000 00:26:02.725 Serial Number: 7b036e353fda5beb1a96 00:26:02.725 Model Number: Linux 00:26:02.725 Firmware Version: 6.8.9-20 00:26:02.725 Recommended Arb Burst: 0 00:26:02.725 IEEE OUI Identifier: 00 00 00 00:26:02.725 Multi-path I/O 00:26:02.725 May have multiple subsystem ports: No 00:26:02.725 May have multiple controllers: No 00:26:02.725 Associated with SR-IOV VF: No 00:26:02.725 Max Data Transfer Size: Unlimited 00:26:02.725 Max Number of Namespaces: 0 00:26:02.725 Max Number of I/O Queues: 1024 00:26:02.725 NVMe Specification Version (VS): 1.3 00:26:02.725 NVMe Specification Version (Identify): 1.3 00:26:02.725 Maximum Queue Entries: 1024 
00:26:02.725 Contiguous Queues Required: No 00:26:02.725 Arbitration Mechanisms Supported 00:26:02.725 Weighted Round Robin: Not Supported 00:26:02.725 Vendor Specific: Not Supported 00:26:02.725 Reset Timeout: 7500 ms 00:26:02.725 Doorbell Stride: 4 bytes 00:26:02.725 NVM Subsystem Reset: Not Supported 00:26:02.725 Command Sets Supported 00:26:02.725 NVM Command Set: Supported 00:26:02.725 Boot Partition: Not Supported 00:26:02.725 Memory Page Size Minimum: 4096 bytes 00:26:02.725 Memory Page Size Maximum: 4096 bytes 00:26:02.725 Persistent Memory Region: Not Supported 00:26:02.725 Optional Asynchronous Events Supported 00:26:02.725 Namespace Attribute Notices: Not Supported 00:26:02.725 Firmware Activation Notices: Not Supported 00:26:02.725 ANA Change Notices: Not Supported 00:26:02.725 PLE Aggregate Log Change Notices: Not Supported 00:26:02.725 LBA Status Info Alert Notices: Not Supported 00:26:02.725 EGE Aggregate Log Change Notices: Not Supported 00:26:02.726 Normal NVM Subsystem Shutdown event: Not Supported 00:26:02.726 Zone Descriptor Change Notices: Not Supported 00:26:02.726 Discovery Log Change Notices: Supported 00:26:02.726 Controller Attributes 00:26:02.726 128-bit Host Identifier: Not Supported 00:26:02.726 Non-Operational Permissive Mode: Not Supported 00:26:02.726 NVM Sets: Not Supported 00:26:02.726 Read Recovery Levels: Not Supported 00:26:02.726 Endurance Groups: Not Supported 00:26:02.726 Predictable Latency Mode: Not Supported 00:26:02.726 Traffic Based Keep ALive: Not Supported 00:26:02.726 Namespace Granularity: Not Supported 00:26:02.726 SQ Associations: Not Supported 00:26:02.726 UUID List: Not Supported 00:26:02.726 Multi-Domain Subsystem: Not Supported 00:26:02.726 Fixed Capacity Management: Not Supported 00:26:02.726 Variable Capacity Management: Not Supported 00:26:02.726 Delete Endurance Group: Not Supported 00:26:02.726 Delete NVM Set: Not Supported 00:26:02.726 Extended LBA Formats Supported: Not Supported 00:26:02.726 Flexible 
Data Placement Supported: Not Supported 00:26:02.726 00:26:02.726 Controller Memory Buffer Support 00:26:02.726 ================================ 00:26:02.726 Supported: No 00:26:02.726 00:26:02.726 Persistent Memory Region Support 00:26:02.726 ================================ 00:26:02.726 Supported: No 00:26:02.726 00:26:02.726 Admin Command Set Attributes 00:26:02.726 ============================ 00:26:02.726 Security Send/Receive: Not Supported 00:26:02.726 Format NVM: Not Supported 00:26:02.726 Firmware Activate/Download: Not Supported 00:26:02.726 Namespace Management: Not Supported 00:26:02.726 Device Self-Test: Not Supported 00:26:02.726 Directives: Not Supported 00:26:02.726 NVMe-MI: Not Supported 00:26:02.726 Virtualization Management: Not Supported 00:26:02.726 Doorbell Buffer Config: Not Supported 00:26:02.726 Get LBA Status Capability: Not Supported 00:26:02.726 Command & Feature Lockdown Capability: Not Supported 00:26:02.726 Abort Command Limit: 1 00:26:02.726 Async Event Request Limit: 1 00:26:02.726 Number of Firmware Slots: N/A 00:26:02.726 Firmware Slot 1 Read-Only: N/A 00:26:02.726 Firmware Activation Without Reset: N/A 00:26:02.726 Multiple Update Detection Support: N/A 00:26:02.726 Firmware Update Granularity: No Information Provided 00:26:02.726 Per-Namespace SMART Log: No 00:26:02.726 Asymmetric Namespace Access Log Page: Not Supported 00:26:02.726 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:02.726 Command Effects Log Page: Not Supported 00:26:02.726 Get Log Page Extended Data: Supported 00:26:02.726 Telemetry Log Pages: Not Supported 00:26:02.726 Persistent Event Log Pages: Not Supported 00:26:02.726 Supported Log Pages Log Page: May Support 00:26:02.726 Commands Supported & Effects Log Page: Not Supported 00:26:02.726 Feature Identifiers & Effects Log Page:May Support 00:26:02.726 NVMe-MI Commands & Effects Log Page: May Support 00:26:02.726 Data Area 4 for Telemetry Log: Not Supported 00:26:02.726 Error Log Page Entries 
Supported: 1 00:26:02.726 Keep Alive: Not Supported 00:26:02.726 00:26:02.726 NVM Command Set Attributes 00:26:02.726 ========================== 00:26:02.726 Submission Queue Entry Size 00:26:02.726 Max: 1 00:26:02.726 Min: 1 00:26:02.726 Completion Queue Entry Size 00:26:02.726 Max: 1 00:26:02.726 Min: 1 00:26:02.726 Number of Namespaces: 0 00:26:02.726 Compare Command: Not Supported 00:26:02.726 Write Uncorrectable Command: Not Supported 00:26:02.726 Dataset Management Command: Not Supported 00:26:02.726 Write Zeroes Command: Not Supported 00:26:02.726 Set Features Save Field: Not Supported 00:26:02.726 Reservations: Not Supported 00:26:02.726 Timestamp: Not Supported 00:26:02.726 Copy: Not Supported 00:26:02.726 Volatile Write Cache: Not Present 00:26:02.726 Atomic Write Unit (Normal): 1 00:26:02.726 Atomic Write Unit (PFail): 1 00:26:02.726 Atomic Compare & Write Unit: 1 00:26:02.726 Fused Compare & Write: Not Supported 00:26:02.726 Scatter-Gather List 00:26:02.726 SGL Command Set: Supported 00:26:02.726 SGL Keyed: Not Supported 00:26:02.726 SGL Bit Bucket Descriptor: Not Supported 00:26:02.726 SGL Metadata Pointer: Not Supported 00:26:02.726 Oversized SGL: Not Supported 00:26:02.726 SGL Metadata Address: Not Supported 00:26:02.726 SGL Offset: Supported 00:26:02.726 Transport SGL Data Block: Not Supported 00:26:02.726 Replay Protected Memory Block: Not Supported 00:26:02.726 00:26:02.726 Firmware Slot Information 00:26:02.726 ========================= 00:26:02.726 Active slot: 0 00:26:02.726 00:26:02.726 00:26:02.726 Error Log 00:26:02.726 ========= 00:26:02.726 00:26:02.726 Active Namespaces 00:26:02.726 ================= 00:26:02.726 Discovery Log Page 00:26:02.726 ================== 00:26:02.726 Generation Counter: 2 00:26:02.726 Number of Records: 2 00:26:02.726 Record Format: 0 00:26:02.726 00:26:02.726 Discovery Log Entry 0 00:26:02.726 ---------------------- 00:26:02.726 Transport Type: 3 (TCP) 00:26:02.726 Address Family: 1 (IPv4) 00:26:02.726 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:02.726 Entry Flags: 00:26:02.726 Duplicate Returned Information: 0 00:26:02.726 Explicit Persistent Connection Support for Discovery: 0 00:26:02.726 Transport Requirements: 00:26:02.726 Secure Channel: Not Specified 00:26:02.726 Port ID: 1 (0x0001) 00:26:02.726 Controller ID: 65535 (0xffff) 00:26:02.726 Admin Max SQ Size: 32 00:26:02.726 Transport Service Identifier: 4420 00:26:02.726 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:02.726 Transport Address: 10.0.0.1 00:26:02.726 Discovery Log Entry 1 00:26:02.726 ---------------------- 00:26:02.726 Transport Type: 3 (TCP) 00:26:02.726 Address Family: 1 (IPv4) 00:26:02.726 Subsystem Type: 2 (NVM Subsystem) 00:26:02.726 Entry Flags: 00:26:02.726 Duplicate Returned Information: 0 00:26:02.726 Explicit Persistent Connection Support for Discovery: 0 00:26:02.726 Transport Requirements: 00:26:02.726 Secure Channel: Not Specified 00:26:02.726 Port ID: 1 (0x0001) 00:26:02.726 Controller ID: 65535 (0xffff) 00:26:02.726 Admin Max SQ Size: 32 00:26:02.726 Transport Service Identifier: 4420 00:26:02.727 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:02.727 Transport Address: 10.0.0.1 00:26:02.727 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:02.727 get_feature(0x01) failed 00:26:02.727 get_feature(0x02) failed 00:26:02.727 get_feature(0x04) failed 00:26:02.727 ===================================================== 00:26:02.727 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:02.727 ===================================================== 00:26:02.727 Controller Capabilities/Features 00:26:02.727 ================================ 00:26:02.727 Vendor ID: 0000 00:26:02.727 Subsystem Vendor ID: 
0000 00:26:02.727 Serial Number: ceb573d0e4a4578baa9f 00:26:02.727 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:02.727 Firmware Version: 6.8.9-20 00:26:02.727 Recommended Arb Burst: 6 00:26:02.727 IEEE OUI Identifier: 00 00 00 00:26:02.727 Multi-path I/O 00:26:02.727 May have multiple subsystem ports: Yes 00:26:02.727 May have multiple controllers: Yes 00:26:02.727 Associated with SR-IOV VF: No 00:26:02.727 Max Data Transfer Size: Unlimited 00:26:02.727 Max Number of Namespaces: 1024 00:26:02.727 Max Number of I/O Queues: 128 00:26:02.727 NVMe Specification Version (VS): 1.3 00:26:02.727 NVMe Specification Version (Identify): 1.3 00:26:02.727 Maximum Queue Entries: 1024 00:26:02.727 Contiguous Queues Required: No 00:26:02.727 Arbitration Mechanisms Supported 00:26:02.727 Weighted Round Robin: Not Supported 00:26:02.727 Vendor Specific: Not Supported 00:26:02.727 Reset Timeout: 7500 ms 00:26:02.727 Doorbell Stride: 4 bytes 00:26:02.727 NVM Subsystem Reset: Not Supported 00:26:02.727 Command Sets Supported 00:26:02.727 NVM Command Set: Supported 00:26:02.727 Boot Partition: Not Supported 00:26:02.727 Memory Page Size Minimum: 4096 bytes 00:26:02.727 Memory Page Size Maximum: 4096 bytes 00:26:02.727 Persistent Memory Region: Not Supported 00:26:02.727 Optional Asynchronous Events Supported 00:26:02.727 Namespace Attribute Notices: Supported 00:26:02.727 Firmware Activation Notices: Not Supported 00:26:02.727 ANA Change Notices: Supported 00:26:02.727 PLE Aggregate Log Change Notices: Not Supported 00:26:02.727 LBA Status Info Alert Notices: Not Supported 00:26:02.727 EGE Aggregate Log Change Notices: Not Supported 00:26:02.727 Normal NVM Subsystem Shutdown event: Not Supported 00:26:02.727 Zone Descriptor Change Notices: Not Supported 00:26:02.727 Discovery Log Change Notices: Not Supported 00:26:02.727 Controller Attributes 00:26:02.727 128-bit Host Identifier: Supported 00:26:02.727 Non-Operational Permissive Mode: Not Supported 00:26:02.727 NVM Sets: Not 
Supported 00:26:02.727 Read Recovery Levels: Not Supported 00:26:02.727 Endurance Groups: Not Supported 00:26:02.727 Predictable Latency Mode: Not Supported 00:26:02.727 Traffic Based Keep ALive: Supported 00:26:02.727 Namespace Granularity: Not Supported 00:26:02.727 SQ Associations: Not Supported 00:26:02.727 UUID List: Not Supported 00:26:02.727 Multi-Domain Subsystem: Not Supported 00:26:02.727 Fixed Capacity Management: Not Supported 00:26:02.727 Variable Capacity Management: Not Supported 00:26:02.727 Delete Endurance Group: Not Supported 00:26:02.727 Delete NVM Set: Not Supported 00:26:02.727 Extended LBA Formats Supported: Not Supported 00:26:02.727 Flexible Data Placement Supported: Not Supported 00:26:02.727 00:26:02.727 Controller Memory Buffer Support 00:26:02.727 ================================ 00:26:02.727 Supported: No 00:26:02.727 00:26:02.727 Persistent Memory Region Support 00:26:02.727 ================================ 00:26:02.727 Supported: No 00:26:02.727 00:26:02.727 Admin Command Set Attributes 00:26:02.727 ============================ 00:26:02.727 Security Send/Receive: Not Supported 00:26:02.727 Format NVM: Not Supported 00:26:02.727 Firmware Activate/Download: Not Supported 00:26:02.727 Namespace Management: Not Supported 00:26:02.727 Device Self-Test: Not Supported 00:26:02.727 Directives: Not Supported 00:26:02.727 NVMe-MI: Not Supported 00:26:02.727 Virtualization Management: Not Supported 00:26:02.727 Doorbell Buffer Config: Not Supported 00:26:02.727 Get LBA Status Capability: Not Supported 00:26:02.727 Command & Feature Lockdown Capability: Not Supported 00:26:02.727 Abort Command Limit: 4 00:26:02.727 Async Event Request Limit: 4 00:26:02.727 Number of Firmware Slots: N/A 00:26:02.727 Firmware Slot 1 Read-Only: N/A 00:26:02.727 Firmware Activation Without Reset: N/A 00:26:02.727 Multiple Update Detection Support: N/A 00:26:02.727 Firmware Update Granularity: No Information Provided 00:26:02.727 Per-Namespace SMART Log: Yes 
00:26:02.727 Asymmetric Namespace Access Log Page: Supported 00:26:02.727 ANA Transition Time : 10 sec 00:26:02.727 00:26:02.727 Asymmetric Namespace Access Capabilities 00:26:02.727 ANA Optimized State : Supported 00:26:02.727 ANA Non-Optimized State : Supported 00:26:02.727 ANA Inaccessible State : Supported 00:26:02.727 ANA Persistent Loss State : Supported 00:26:02.727 ANA Change State : Supported 00:26:02.727 ANAGRPID is not changed : No 00:26:02.727 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:02.727 00:26:02.727 ANA Group Identifier Maximum : 128 00:26:02.727 Number of ANA Group Identifiers : 128 00:26:02.727 Max Number of Allowed Namespaces : 1024 00:26:02.727 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:02.727 Command Effects Log Page: Supported 00:26:02.727 Get Log Page Extended Data: Supported 00:26:02.727 Telemetry Log Pages: Not Supported 00:26:02.727 Persistent Event Log Pages: Not Supported 00:26:02.727 Supported Log Pages Log Page: May Support 00:26:02.727 Commands Supported & Effects Log Page: Not Supported 00:26:02.727 Feature Identifiers & Effects Log Page:May Support 00:26:02.727 NVMe-MI Commands & Effects Log Page: May Support 00:26:02.727 Data Area 4 for Telemetry Log: Not Supported 00:26:02.727 Error Log Page Entries Supported: 128 00:26:02.727 Keep Alive: Supported 00:26:02.727 Keep Alive Granularity: 1000 ms 00:26:02.727 00:26:02.727 NVM Command Set Attributes 00:26:02.727 ========================== 00:26:02.727 Submission Queue Entry Size 00:26:02.727 Max: 64 00:26:02.727 Min: 64 00:26:02.727 Completion Queue Entry Size 00:26:02.727 Max: 16 00:26:02.727 Min: 16 00:26:02.727 Number of Namespaces: 1024 00:26:02.727 Compare Command: Not Supported 00:26:02.727 Write Uncorrectable Command: Not Supported 00:26:02.727 Dataset Management Command: Supported 00:26:02.727 Write Zeroes Command: Supported 00:26:02.727 Set Features Save Field: Not Supported 00:26:02.727 Reservations: Not Supported 00:26:02.727 Timestamp: Not Supported 
00:26:02.728 Copy: Not Supported 00:26:02.728 Volatile Write Cache: Present 00:26:02.728 Atomic Write Unit (Normal): 1 00:26:02.728 Atomic Write Unit (PFail): 1 00:26:02.728 Atomic Compare & Write Unit: 1 00:26:02.728 Fused Compare & Write: Not Supported 00:26:02.728 Scatter-Gather List 00:26:02.728 SGL Command Set: Supported 00:26:02.728 SGL Keyed: Not Supported 00:26:02.728 SGL Bit Bucket Descriptor: Not Supported 00:26:02.728 SGL Metadata Pointer: Not Supported 00:26:02.728 Oversized SGL: Not Supported 00:26:02.728 SGL Metadata Address: Not Supported 00:26:02.728 SGL Offset: Supported 00:26:02.728 Transport SGL Data Block: Not Supported 00:26:02.728 Replay Protected Memory Block: Not Supported 00:26:02.728 00:26:02.728 Firmware Slot Information 00:26:02.728 ========================= 00:26:02.728 Active slot: 0 00:26:02.728 00:26:02.728 Asymmetric Namespace Access 00:26:02.728 =========================== 00:26:02.728 Change Count : 0 00:26:02.728 Number of ANA Group Descriptors : 1 00:26:02.728 ANA Group Descriptor : 0 00:26:02.728 ANA Group ID : 1 00:26:02.728 Number of NSID Values : 1 00:26:02.728 Change Count : 0 00:26:02.728 ANA State : 1 00:26:02.728 Namespace Identifier : 1 00:26:02.728 00:26:02.728 Commands Supported and Effects 00:26:02.728 ============================== 00:26:02.728 Admin Commands 00:26:02.728 -------------- 00:26:02.728 Get Log Page (02h): Supported 00:26:02.728 Identify (06h): Supported 00:26:02.728 Abort (08h): Supported 00:26:02.728 Set Features (09h): Supported 00:26:02.728 Get Features (0Ah): Supported 00:26:02.728 Asynchronous Event Request (0Ch): Supported 00:26:02.728 Keep Alive (18h): Supported 00:26:02.728 I/O Commands 00:26:02.728 ------------ 00:26:02.728 Flush (00h): Supported 00:26:02.728 Write (01h): Supported LBA-Change 00:26:02.728 Read (02h): Supported 00:26:02.728 Write Zeroes (08h): Supported LBA-Change 00:26:02.728 Dataset Management (09h): Supported 00:26:02.728 00:26:02.728 Error Log 00:26:02.728 ========= 
00:26:02.728 Entry: 0 00:26:02.728 Error Count: 0x3 00:26:02.728 Submission Queue Id: 0x0 00:26:02.728 Command Id: 0x5 00:26:02.728 Phase Bit: 0 00:26:02.728 Status Code: 0x2 00:26:02.728 Status Code Type: 0x0 00:26:02.728 Do Not Retry: 1 00:26:02.728 Error Location: 0x28 00:26:02.728 LBA: 0x0 00:26:02.728 Namespace: 0x0 00:26:02.728 Vendor Log Page: 0x0 00:26:02.728 ----------- 00:26:02.728 Entry: 1 00:26:02.728 Error Count: 0x2 00:26:02.728 Submission Queue Id: 0x0 00:26:02.728 Command Id: 0x5 00:26:02.728 Phase Bit: 0 00:26:02.728 Status Code: 0x2 00:26:02.728 Status Code Type: 0x0 00:26:02.728 Do Not Retry: 1 00:26:02.728 Error Location: 0x28 00:26:02.728 LBA: 0x0 00:26:02.728 Namespace: 0x0 00:26:02.728 Vendor Log Page: 0x0 00:26:02.728 ----------- 00:26:02.728 Entry: 2 00:26:02.728 Error Count: 0x1 00:26:02.728 Submission Queue Id: 0x0 00:26:02.728 Command Id: 0x4 00:26:02.728 Phase Bit: 0 00:26:02.728 Status Code: 0x2 00:26:02.728 Status Code Type: 0x0 00:26:02.728 Do Not Retry: 1 00:26:02.728 Error Location: 0x28 00:26:02.728 LBA: 0x0 00:26:02.728 Namespace: 0x0 00:26:02.728 Vendor Log Page: 0x0 00:26:02.728 00:26:02.728 Number of Queues 00:26:02.728 ================ 00:26:02.728 Number of I/O Submission Queues: 128 00:26:02.728 Number of I/O Completion Queues: 128 00:26:02.728 00:26:02.728 ZNS Specific Controller Data 00:26:02.728 ============================ 00:26:02.728 Zone Append Size Limit: 0 00:26:02.728 00:26:02.728 00:26:02.728 Active Namespaces 00:26:02.728 ================= 00:26:02.728 get_feature(0x05) failed 00:26:02.728 Namespace ID:1 00:26:02.728 Command Set Identifier: NVM (00h) 00:26:02.728 Deallocate: Supported 00:26:02.728 Deallocated/Unwritten Error: Not Supported 00:26:02.728 Deallocated Read Value: Unknown 00:26:02.728 Deallocate in Write Zeroes: Not Supported 00:26:02.728 Deallocated Guard Field: 0xFFFF 00:26:02.728 Flush: Supported 00:26:02.728 Reservation: Not Supported 00:26:02.728 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:02.728 Size (in LBAs): 1953525168 (931GiB) 00:26:02.728 Capacity (in LBAs): 1953525168 (931GiB) 00:26:02.728 Utilization (in LBAs): 1953525168 (931GiB) 00:26:02.728 UUID: 7c72fc59-4a38-4686-bf9e-d2efe422970d 00:26:02.728 Thin Provisioning: Not Supported 00:26:02.728 Per-NS Atomic Units: Yes 00:26:02.728 Atomic Boundary Size (Normal): 0 00:26:02.728 Atomic Boundary Size (PFail): 0 00:26:02.728 Atomic Boundary Offset: 0 00:26:02.728 NGUID/EUI64 Never Reused: No 00:26:02.728 ANA group ID: 1 00:26:02.728 Namespace Write Protected: No 00:26:02.728 Number of LBA Formats: 1 00:26:02.728 Current LBA Format: LBA Format #00 00:26:02.728 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:02.728 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.728 rmmod nvme_tcp 00:26:02.728 rmmod nvme_fabrics 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 
00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.728 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:05.265 09:46:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:05.265 09:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:06.202 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:06.202 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:06.202 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:06.202 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:06.202 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:06.202 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:06.202 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:06.202 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:06.202 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:06.202 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:06.202 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:06.202 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:06.202 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:06.202 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:06.202 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:06.202 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:26:07.140 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:26:07.140 00:26:07.140 real 0m9.695s 00:26:07.140 user 0m2.013s 00:26:07.140 sys 0m3.691s 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.140 ************************************ 00:26:07.140 END TEST nvmf_identify_kernel_target 00:26:07.140 ************************************ 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.140 ************************************ 00:26:07.140 START TEST nvmf_auth_host 00:26:07.140 ************************************ 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:07.140 * Looking for test storage... 
00:26:07.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:26:07.140 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:07.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.399 --rc genhtml_branch_coverage=1 00:26:07.399 --rc genhtml_function_coverage=1 00:26:07.399 --rc genhtml_legend=1 00:26:07.399 --rc geninfo_all_blocks=1 00:26:07.399 --rc geninfo_unexecuted_blocks=1 00:26:07.399 00:26:07.399 ' 00:26:07.399 09:46:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:07.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.399 --rc genhtml_branch_coverage=1 00:26:07.399 --rc genhtml_function_coverage=1 00:26:07.399 --rc genhtml_legend=1 00:26:07.399 --rc geninfo_all_blocks=1 00:26:07.399 --rc geninfo_unexecuted_blocks=1 00:26:07.399 00:26:07.399 ' 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:07.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.399 --rc genhtml_branch_coverage=1 00:26:07.399 --rc genhtml_function_coverage=1 00:26:07.399 --rc genhtml_legend=1 00:26:07.399 --rc geninfo_all_blocks=1 00:26:07.399 --rc geninfo_unexecuted_blocks=1 00:26:07.399 00:26:07.399 ' 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:07.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.399 --rc genhtml_branch_coverage=1 00:26:07.399 --rc genhtml_function_coverage=1 00:26:07.399 --rc genhtml_legend=1 00:26:07.399 --rc geninfo_all_blocks=1 00:26:07.399 --rc geninfo_unexecuted_blocks=1 00:26:07.399 00:26:07.399 ' 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.399 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.400 09:46:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:07.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:07.400 09:46:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:07.400 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.304 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:26:09.304 Found 0000:09:00.0 (0x8086 - 0x1592) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:26:09.305 Found 0000:09:00.1 (0x8086 - 0x1592) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:09.305 Found net devices under 0000:09:00.0: cvl_0_0 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:09.305 Found net devices under 0000:09:00.1: cvl_0_1 00:26:09.305 09:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:09.305 09:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:09.305 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:09.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:26:09.564 00:26:09.564 --- 10.0.0.2 ping statistics --- 00:26:09.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.564 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:09.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:26:09.564 00:26:09.564 --- 10.0.0.1 ping statistics --- 00:26:09.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.564 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=309737 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:09.564 09:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 309737 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 309737 ']' 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:09.564 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=21cc6fd12ed0847b40ff615b84bb9f22 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.bUa 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 21cc6fd12ed0847b40ff615b84bb9f22 0 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 21cc6fd12ed0847b40ff615b84bb9f22 0 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=21cc6fd12ed0847b40ff615b84bb9f22 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.bUa 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.bUa 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.bUa 00:26:09.823 09:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1d02b07c044acc392b186909bc89e39c3e97f6088671e349d77b6f8fe514ad3b 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.a7x 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1d02b07c044acc392b186909bc89e39c3e97f6088671e349d77b6f8fe514ad3b 3 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1d02b07c044acc392b186909bc89e39c3e97f6088671e349d77b6f8fe514ad3b 3 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1d02b07c044acc392b186909bc89e39c3e97f6088671e349d77b6f8fe514ad3b 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.a7x 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.a7x 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.a7x 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=be1fdc0c4df2d07697410a4620855b063ea5a065792634e1 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.TLK 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key be1fdc0c4df2d07697410a4620855b063ea5a065792634e1 0 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 be1fdc0c4df2d07697410a4620855b063ea5a065792634e1 0 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:09.823 09:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=be1fdc0c4df2d07697410a4620855b063ea5a065792634e1 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:09.823 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.TLK 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.TLK 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.TLK 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d752c873ee8b9324880aef28888ef43fd180f20c78d6d2df 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.fDf 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d752c873ee8b9324880aef28888ef43fd180f20c78d6d2df 2 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
format_key DHHC-1 d752c873ee8b9324880aef28888ef43fd180f20c78d6d2df 2 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d752c873ee8b9324880aef28888ef43fd180f20c78d6d2df 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.fDf 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.fDf 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fDf 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=77f055aba53e8ea01e0b0c8a80ea4632 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.a6B 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 77f055aba53e8ea01e0b0c8a80ea4632 1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 77f055aba53e8ea01e0b0c8a80ea4632 1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=77f055aba53e8ea01e0b0c8a80ea4632 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.a6B 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.a6B 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.a6B 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@753 -- # key=79f1d40a49923c367d8aa2f38975ee6a 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.R3w 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 79f1d40a49923c367d8aa2f38975ee6a 1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 79f1d40a49923c367d8aa2f38975ee6a 1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=79f1d40a49923c367d8aa2f38975ee6a 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.R3w 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.R3w 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.R3w 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:10.082 09:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=74b22b1b7e77382b6370cae1de7a37310c0c30a302fdc89b 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.zC7 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 74b22b1b7e77382b6370cae1de7a37310c0c30a302fdc89b 2 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 74b22b1b7e77382b6370cae1de7a37310c0c30a302fdc89b 2 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=74b22b1b7e77382b6370cae1de7a37310c0c30a302fdc89b 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:10.082 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:10.082 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.zC7 00:26:10.082 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.zC7 00:26:10.082 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zC7 00:26:10.082 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:10.082 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:10.082 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:10.082 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=afb63183f32fb8b5076cb33b59868f4c 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.p0C 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key afb63183f32fb8b5076cb33b59868f4c 0 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 afb63183f32fb8b5076cb33b59868f4c 0 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=afb63183f32fb8b5076cb33b59868f4c 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.p0C 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.p0C 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.p0C 00:26:10.083 09:46:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:10.083 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9a3af814a92bfebd9132f44f6b5e3aea3d44aebebd8f56876bab0af3bb242f0a 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.GGZ 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9a3af814a92bfebd9132f44f6b5e3aea3d44aebebd8f56876bab0af3bb242f0a 3 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9a3af814a92bfebd9132f44f6b5e3aea3d44aebebd8f56876bab0af3bb242f0a 3 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=9a3af814a92bfebd9132f44f6b5e3aea3d44aebebd8f56876bab0af3bb242f0a 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
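Each `gen_dhchap_key <digest> <len>` call traced above repeats the same steps: draw `len/2` random bytes from /dev/urandom as `len` hex characters via `xxd`, create a `spdk.key-<digest>.XXX` temp file, format the key, and restrict it to mode 0600. A minimal standalone sketch of those steps, assuming `xxd` is available as in the trace (the DHHC-1 formatting step from nvmf/common.sh is elided here, so the raw hex stands in for the file contents):

```shell
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key steps shown in the xtrace above.
gen_dhchap_key() {
  local digest=$1 len=$2 file key
  # len hex characters come from len/2 random bytes of /dev/urandom
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  file=$(mktemp -t "spdk.key-$digest.XXX")
  # the real helper writes the DHHC-1 formatted key; raw hex stands in here
  echo "$key" > "$file"
  chmod 0600 "$file"
  echo "$file"
}

keyfile=$(gen_dhchap_key null 32)
```

This mirrors why a "null 32" key yields `xxd -p -c0 -l 16` in the trace: 16 random bytes expand to 32 hex characters.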
00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.GGZ 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.GGZ 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GGZ 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 309737 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 309737 ']' 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
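The `format_dhchap_key` / `format_key` / inline `python -` steps that recur in the trace above turn each raw hex secret into the `DHHC-1:<digest>:<base64>:` strings loaded later (e.g. `/tmp/spdk.key-null.bUa` becoming `DHHC-1:00:YmUxZmRj...`). A hedged sketch of that step, assuming the DH-HMAC-CHAP secret representation (base64 of the secret's ASCII bytes followed by their CRC-32, little-endian); the real helper lives in nvmf/common.sh and this reimplements it for illustration only:

```shell
#!/usr/bin/env bash
# Sketch of the format_key step: DHHC-1:<digest as 2 hex digits>:<b64>:
# Assumption: payload is base64(secret_bytes || CRC-32(secret_bytes) LE).
format_dhchap_key() {
  local key=$1 digest=$2 prefix="DHHC-1"
  python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended little-endian
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}

out=$(format_dhchap_key 21cc6fd12ed0847b40ff615b84bb9f22 0)
```

Decoding the base64 field and dropping the trailing 4 CRC bytes recovers the original hex secret, which is how the `keys[i]`/`ckeys[i]` files and the `DHHC-1:...` strings later in the log correspond.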
00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.341 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.599 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.599 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:10.599 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.599 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bUa 00:26:10.599 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.599 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.a7x ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.a7x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.TLK 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fDf ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fDf 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.a6B 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.R3w ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.R3w 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.zC7 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.p0C ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.p0C 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GGZ 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:10.600 09:46:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:10.600 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:11.533 Waiting for block devices as requested 00:26:11.533 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:26:11.533 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:11.791 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:11.791 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:12.050 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:12.050 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:12.050 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:12.050 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:12.308 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:12.308 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:12.308 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:12.308 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:12.566 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:12.566 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:12.566 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:12.566 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:12.824 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:13.083 No valid GPT data, bailing 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:13.083 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:13.340 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 
-- # echo 10.0.0.1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.1 -t tcp -s 4420 00:26:13.341 00:26:13.341 Discovery Log Number of Records 2, Generation counter 2 00:26:13.341 =====Discovery Log Entry 0====== 00:26:13.341 trtype: tcp 00:26:13.341 adrfam: ipv4 00:26:13.341 subtype: current discovery subsystem 00:26:13.341 treq: not specified, sq flow control disable supported 00:26:13.341 portid: 1 00:26:13.341 trsvcid: 4420 00:26:13.341 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:13.341 traddr: 10.0.0.1 00:26:13.341 eflags: none 00:26:13.341 sectype: none 00:26:13.341 =====Discovery Log Entry 1====== 00:26:13.341 trtype: tcp 00:26:13.341 adrfam: ipv4 00:26:13.341 subtype: nvme subsystem 00:26:13.341 treq: not specified, sq flow control disable supported 00:26:13.341 portid: 1 00:26:13.341 trsvcid: 4420 00:26:13.341 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:13.341 traddr: 10.0.0.1 00:26:13.341 eflags: none 00:26:13.341 sectype: none 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.341 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.599 nvme0n1 00:26:13.599 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.599 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.599 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.599 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.600 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.858 nvme0n1 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.858 09:47:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.858 
09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:13.858 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.859 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.117 nvme0n1 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:14.117 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.118 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:14.118 nvme0n1 00:26:14.118 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.118 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.118 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.118 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.118 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.118 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.376 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.377 nvme0n1 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.377 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.636 09:47:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.636 nvme0n1 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.636 
09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.636 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.637 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:14.637 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:14.637 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.637 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:15.204 
09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:15.204 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.205 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.205 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:15.205 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.205 09:47:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:15.205 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:15.205 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:15.205 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.205 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.205 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.205 nvme0n1 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.205 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.205 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.205 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.464 nvme0n1 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.464 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.464 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.723 nvme0n1 00:26:15.723 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:15.723 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.723 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.981 nvme0n1 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.981 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.982 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.982 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.240 nvme0n1 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.240 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.806 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.064 nvme0n1 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.064 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.323 
09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.323 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.582 nvme0n1 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.582 09:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.582 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:17.583 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.583 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:17.583 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:17.583 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:17.583 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.583 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.583 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.841 nvme0n1 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.841 09:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:17.841 
09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:17.841 09:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.841 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.100 nvme0n1 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.100 09:47:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.100 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:18.359 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.359 
09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:18.359 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:18.359 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:18.359 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.359 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.359 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.617 nvme0n1 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.617 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.527 09:47:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.527 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.785 nvme0n1 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:20.785 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.786 09:47:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.786 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.353 nvme0n1
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]]
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:21.353 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:21.612 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.870 nvme0n1
00:26:21.870 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:21.870 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.870 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.870 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:21.870 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.870 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]]
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:22.127 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.693 nvme0n1
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:22.693 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.259 nvme0n1
00:26:23.259 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:23.259 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:23.259 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:23.259 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.259 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:23.259 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]]
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:23.259 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.193 nvme0n1
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:24.193 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.128 nvme0n1
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:25.128 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:26.070 nvme0n1
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:26.070 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.011 nvme0n1
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:27.011
09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.011 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.012 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.951 nvme0n1 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.951 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 nvme0n1 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.952 
09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.952 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.213 nvme0n1 
00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:28.213 09:47:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.213 
09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.213 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.473 nvme0n1 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.473 09:47:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.473 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.733 nvme0n1 00:26:28.733 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.733 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.733 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.734 09:47:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.734 nvme0n1 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.734 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.992 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.993 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.251 nvme0n1 00:26:29.251 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.251 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.251 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.252 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.252 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.252 
09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.252 nvme0n1 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.252 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 
00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.510 09:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.510 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.511 nvme0n1 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.511 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.511 09:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.769 nvme0n1 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.769 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.028 nvme0n1 00:26:30.028 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.029 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.029 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.029 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.029 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.029 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.029 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.029 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.029 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.029 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.288 09:47:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]]
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.288 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.548 nvme0n1
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.548 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.808 nvme0n1
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:30.808 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]]
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.809 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.069 nvme0n1
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.069 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]]
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.069 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.327 nvme0n1
00:26:31.327 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.327 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.327 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:31.327 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.327 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.586 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.844 nvme0n1
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:31.844 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]]
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:31.845 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.413 nvme0n1
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.413 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.980 nvme0n1
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]]
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- #
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.980 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.981 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.981 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.981 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.981 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.981 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.981 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.549 nvme0n1 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:33.549 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 
-- # ip=NVMF_INITIATOR_IP 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.550 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.130 nvme0n1 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.130 09:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:34.756 nvme0n1 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.756 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.757 09:47:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.757 09:47:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.757 09:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.382 nvme0n1 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:35.382 09:47:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.382 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:35.383 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 
00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.690 09:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.287 nvme0n1 00:26:36.287 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.287 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.287 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.287 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.287 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.287 
09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.287 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.287 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]]
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:36.288 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:36.546 09:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.483 nvme0n1
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:37.483 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.051 nvme0n1
00:26:38.051 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:38.051 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:38.051 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:38.051 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:38.051 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.051 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:38.310 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:38.311 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.249 nvme0n1
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.249 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.249 nvme0n1
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:39.249 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]]
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.250 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.511 nvme0n1
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.511 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.771 nvme0n1
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.771 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.030 nvme0n1
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.030 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:40.031 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.289 nvme0n1
00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd
bdev_nvme_get_controllers 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.289 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.290 09:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.290 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.548 nvme0n1 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.548 09:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:40.548 
09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
local -A ip_candidates 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.548 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.807 nvme0n1 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 
00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.807 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.808 09:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.808 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.067 nvme0n1 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.067 09:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:41.067 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.068 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.328 nvme0n1 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.328 09:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.328 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.589 nvme0n1 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:41.589 09:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.589 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.851 nvme0n1 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.851 09:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:41.851 09:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.851 09:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.851 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.111 nvme0n1 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.111 09:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:42.111 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:42.112 09:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.112 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.371 nvme0n1 00:26:42.371 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.371 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.371 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.371 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.371 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.632 09:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.632 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.892 nvme0n1 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.892 
09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.892 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.151 nvme0n1 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:43.151 09:47:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i: 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]] 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # local -A ip_candidates
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:43.151 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.720 nvme0n1
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]]
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:43.720 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:43.721 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:43.721 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:43.721 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:43.721 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:43.721 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.287 nvme0n1
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]]
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:44.287 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.288 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.858 nvme0n1
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==:
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU:
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.858 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.428 nvme0n1
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=:
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:45.428 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:45.429 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.000 nvme0n1
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFjYzZmZDEyZWQwODQ3YjQwZmY2MTViODRiYjlmMjIwnh3i:
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=: ]]
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQwMmIwN2MwNDRhY2MzOTJiMTg2OTA5YmM4OWUzOWMzZTk3ZjYwODg2NzFlMzQ5ZDc3YjZmOGZlNTE0YWQzYozPbqQ=:
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:46.000 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.942 nvme0n1
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==:
00:26:46.942 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]]
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==:
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.943 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.881 nvme0n1
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:47.881 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE:
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]]
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw:
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:47.882 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.826 nvme0n1
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:48.826 09:47:37
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzRiMjJiMWI3ZTc3MzgyYjYzNzBjYWUxZGU3YTM3MzEwYzBjMzBhMzAyZmRjODliz6QBuw==: 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: ]] 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZiNjMxODNmMzJmYjhiNTA3NmNiMzNiNTk4NjhmNGMUhGlU: 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.826 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:49.766 nvme0n1 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEzYWY4MTRhOTJiZmViZDkxMzJmNDRmNmI1ZTNhZWEzZDQ0YWViZWJkOGY1Njg3NmJhYjBhZjNiYjI0MmYwYVoKU2M=: 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:49.766 
09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.766 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.705 nvme0n1 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:50.705 
09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.705 request: 00:26:50.705 { 00:26:50.705 "name": "nvme0", 00:26:50.705 "trtype": "tcp", 00:26:50.705 "traddr": "10.0.0.1", 00:26:50.705 "adrfam": "ipv4", 00:26:50.705 "trsvcid": "4420", 00:26:50.705 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:50.705 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:50.705 "prchk_reftag": false, 00:26:50.705 "prchk_guard": false, 00:26:50.705 "hdgst": false, 00:26:50.705 "ddgst": false, 00:26:50.705 "allow_unrecognized_csi": false, 00:26:50.705 "method": "bdev_nvme_attach_controller", 00:26:50.705 "req_id": 1 00:26:50.705 } 00:26:50.705 Got JSON-RPC error response 00:26:50.705 response: 00:26:50.705 { 00:26:50.705 "code": -5, 00:26:50.705 "message": "Input/output 
error" 00:26:50.705 } 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.705 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.706 request: 00:26:50.706 { 00:26:50.706 "name": "nvme0", 00:26:50.706 "trtype": "tcp", 00:26:50.706 "traddr": "10.0.0.1", 
00:26:50.706 "adrfam": "ipv4", 00:26:50.706 "trsvcid": "4420", 00:26:50.706 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:50.706 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:50.706 "prchk_reftag": false, 00:26:50.706 "prchk_guard": false, 00:26:50.706 "hdgst": false, 00:26:50.706 "ddgst": false, 00:26:50.706 "dhchap_key": "key2", 00:26:50.706 "allow_unrecognized_csi": false, 00:26:50.706 "method": "bdev_nvme_attach_controller", 00:26:50.706 "req_id": 1 00:26:50.706 } 00:26:50.706 Got JSON-RPC error response 00:26:50.706 response: 00:26:50.706 { 00:26:50.706 "code": -5, 00:26:50.706 "message": "Input/output error" 00:26:50.706 } 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.706 09:47:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:50.706 09:47:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.706 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.966 request: 00:26:50.966 { 00:26:50.966 "name": "nvme0", 00:26:50.966 "trtype": "tcp", 00:26:50.966 "traddr": "10.0.0.1", 00:26:50.966 "adrfam": "ipv4", 00:26:50.966 "trsvcid": "4420", 00:26:50.966 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:50.966 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:50.966 "prchk_reftag": false, 00:26:50.966 "prchk_guard": false, 00:26:50.966 "hdgst": false, 00:26:50.966 "ddgst": false, 00:26:50.966 "dhchap_key": "key1", 00:26:50.966 "dhchap_ctrlr_key": "ckey2", 00:26:50.966 "allow_unrecognized_csi": false, 00:26:50.966 "method": "bdev_nvme_attach_controller", 00:26:50.966 "req_id": 1 00:26:50.966 } 00:26:50.966 Got JSON-RPC error response 00:26:50.966 response: 00:26:50.966 { 00:26:50.966 "code": -5, 00:26:50.966 "message": "Input/output error" 00:26:50.966 } 00:26:50.966 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:50.966 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:50.966 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.966 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.966 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.966 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.967 nvme0n1 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.967 09:47:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.967 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.226 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.226 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.226 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:51.226 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.226 09:47:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.226 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.226 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.226 request: 00:26:51.226 { 00:26:51.226 "name": "nvme0", 00:26:51.226 "dhchap_key": "key1", 00:26:51.226 "dhchap_ctrlr_key": "ckey2", 00:26:51.226 "method": "bdev_nvme_set_keys", 00:26:51.226 "req_id": 1 00:26:51.226 } 00:26:51.226 Got JSON-RPC error response 00:26:51.226 response: 00:26:51.226 { 00:26:51.226 "code": -13, 00:26:51.226 "message": "Permission denied" 00:26:51.226 } 00:26:51.226 
09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:51.227 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:52.165 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.165 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.165 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.165 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:52.165 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.425 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:52.425 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUxZmRjMGM0ZGYyZDA3Njk3NDEwYTQ2MjA4NTViMDYzZWE1YTA2NTc5MjYzNGUxMrg7YA==: 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: ]] 00:26:53.363 09:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc1MmM4NzNlZThiOTMyNDg4MGFlZjI4ODg4ZWY0M2ZkMTgwZjIwYzc4ZDZkMmRmZWrMPw==: 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.363 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.623 nvme0n1 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.623 09:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdmMDU1YWJhNTNlOGVhMDFlMGIwYzhhODBlYTQ2MzI3cUbE: 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: ]] 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlmMWQ0MGE0OTkyM2MzNjdkOGFhMmYzODk3NWVlNmFPiNIw: 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:53.623 
09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.623 request: 00:26:53.623 { 00:26:53.623 "name": "nvme0", 00:26:53.623 "dhchap_key": "key2", 00:26:53.623 "dhchap_ctrlr_key": "ckey1", 00:26:53.623 "method": "bdev_nvme_set_keys", 00:26:53.623 "req_id": 1 00:26:53.623 } 00:26:53.623 Got JSON-RPC error response 00:26:53.623 response: 00:26:53.623 { 00:26:53.623 "code": -13, 00:26:53.623 "message": "Permission denied" 00:26:53.623 } 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.623 09:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:53.623 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.563 rmmod nvme_tcp 00:26:54.563 rmmod nvme_fabrics 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 309737 ']' 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 309737 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 309737 ']' 00:26:54.563 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 309737 00:26:54.823 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:54.823 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:54.823 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 309737 00:26:54.823 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:54.823 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:54.823 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 309737' 00:26:54.823 killing process with pid 309737 00:26:54.823 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 309737 00:26:54.824 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 309737 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.084 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:56.994 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:58.375 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:58.376 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:58.376 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:58.376 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:58.376 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:58.376 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:58.376 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:58.376 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:58.376 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:58.376 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:58.376 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:58.376 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:58.376 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:58.376 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:58.376 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:58.376 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:59.310 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:26:59.310 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.bUa /tmp/spdk.key-null.TLK /tmp/spdk.key-sha256.a6B /tmp/spdk.key-sha384.zC7 /tmp/spdk.key-sha512.GGZ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:59.310 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:00.689 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:00.689 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:00.689 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:00.689 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:00.689 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:00.689 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:00.689 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:00.689 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:00.689 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:00.689 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:00.689 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:00.689 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:00.689 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:00.689 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:00.689 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:00.689 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:00.689 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:00.689 00:27:00.689 real 0m53.585s 00:27:00.689 user 0m51.130s 00:27:00.689 sys 0m5.751s 00:27:00.689 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:00.689 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.689 ************************************ 00:27:00.689 END TEST nvmf_auth_host 00:27:00.689 ************************************ 00:27:00.689 09:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:27:00.689 09:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:00.689 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:00.689 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:00.689 09:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.949 ************************************ 00:27:00.949 START TEST nvmf_digest 00:27:00.949 ************************************ 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:00.949 * Looking for test storage... 00:27:00.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:00.949 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:00.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.950 --rc genhtml_branch_coverage=1 00:27:00.950 --rc genhtml_function_coverage=1 00:27:00.950 --rc genhtml_legend=1 00:27:00.950 --rc geninfo_all_blocks=1 00:27:00.950 --rc geninfo_unexecuted_blocks=1 00:27:00.950 00:27:00.950 ' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:00.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.950 --rc genhtml_branch_coverage=1 00:27:00.950 --rc genhtml_function_coverage=1 00:27:00.950 --rc genhtml_legend=1 00:27:00.950 --rc geninfo_all_blocks=1 00:27:00.950 --rc geninfo_unexecuted_blocks=1 00:27:00.950 00:27:00.950 ' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:00.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.950 --rc genhtml_branch_coverage=1 00:27:00.950 --rc genhtml_function_coverage=1 00:27:00.950 --rc genhtml_legend=1 00:27:00.950 --rc geninfo_all_blocks=1 00:27:00.950 --rc geninfo_unexecuted_blocks=1 00:27:00.950 00:27:00.950 ' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:00.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.950 --rc genhtml_branch_coverage=1 00:27:00.950 --rc genhtml_function_coverage=1 00:27:00.950 --rc genhtml_legend=1 00:27:00.950 --rc geninfo_all_blocks=1 00:27:00.950 --rc geninfo_unexecuted_blocks=1 00:27:00.950 00:27:00.950 ' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:00.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:00.950 09:47:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.950 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.855 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.855 09:47:51 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:27:02.856 Found 0000:09:00.0 (0x8086 - 0x1592) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:27:02.856 Found 0000:09:00.1 (0x8086 - 0x1592) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:02.856 Found net devices under 0000:09:00.0: cvl_0_0 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:02.856 Found net devices under 0000:09:00.1: cvl_0_1 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@440 -- # is_hw=yes 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.856 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:03.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:27:03.115 00:27:03.115 --- 10.0.0.2 ping statistics --- 00:27:03.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.115 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:27:03.115 00:27:03.115 --- 10.0.0.1 ping statistics --- 00:27:03.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.115 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.115 09:47:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:03.115 ************************************ 00:27:03.115 START TEST nvmf_digest_clean 00:27:03.115 ************************************ 00:27:03.115 
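The `ipts` call in the trace above is a wrapper that re-issues the given iptables rule with an identifying `-m comment --comment 'SPDK_NVMF:<rule>'` suffix, so that teardown can later find and delete exactly the rules the test added. A hedged sketch of just the argument transformation (`ipts_args` is a hypothetical name; it does not invoke iptables, which requires root):

```shell
# Mirror the ipts wrapper's behavior at the argument level: echo the rule
# followed by a comment match that embeds the original rule text.
ipts_args() {
  printf '%s -m comment --comment SPDK_NVMF:%s\n' "$*" "$*"
}

ipts_args -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The output matches the expanded `iptables` command visible in the log at `nvmf/common.sh@788`, minus the shell quoting around the comment.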
09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=319348 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 319348 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 319348 ']' 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:03.115 09:47:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:03.115 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.115 [2024-10-07 09:47:52.064236] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:03.115 [2024-10-07 09:47:52.064311] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.375 [2024-10-07 09:47:52.125490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.375 [2024-10-07 09:47:52.231545] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.375 [2024-10-07 09:47:52.231615] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.375 [2024-10-07 09:47:52.231629] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.375 [2024-10-07 09:47:52.231640] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.375 [2024-10-07 09:47:52.231649] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:03.375 [2024-10-07 09:47:52.232230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.375 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.375 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.376 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.635 null0 00:27:03.635 [2024-10-07 09:47:52.437181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.635 [2024-10-07 09:47:52.461430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=319369 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 319369 /var/tmp/bperf.sock 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 319369 ']' 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:03.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
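`waitforlisten` in the trace blocks until the launched process exposes its RPC socket (`/var/tmp/bperf.sock` here), retrying up to `max_retries` times. A simplified sketch of that polling pattern, assuming only a socket-path existence check (the real helper in `autotest_common.sh` also verifies the target PID is still alive; `waitfor_socket` is a hypothetical name):

```shell
# Poll until a UNIX-domain socket appears, sleeping briefly between tries.
# Returns 0 once the socket exists, 1 after max_retries attempts.
waitfor_socket() {
  local sock=$1 max_retries=${2:-100} i=0
  while (( i < max_retries )); do
    [[ -S $sock ]] && return 0
    sleep 0.1
    (( ++i ))
  done
  return 1
}

waitfor_socket /tmp/no-such.sock 2 || echo "timed out"   # → timed out
```

In the log's flow this gate is what separates "Waiting for process to start up..." from the first `rpc.py -s /var/tmp/bperf.sock` call.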
00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:03.635 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.635 [2024-10-07 09:47:52.509335] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:03.635 [2024-10-07 09:47:52.509395] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319369 ] 00:27:03.635 [2024-10-07 09:47:52.564326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.893 [2024-10-07 09:47:52.673114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.893 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.893 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:03.893 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:03.893 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:03.893 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:04.152 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.152 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.718 nvme0n1 00:27:04.718 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:04.718 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.718 Running I/O for 2 seconds... 00:27:07.035 18599.00 IOPS, 72.65 MiB/s 18649.50 IOPS, 72.85 MiB/s 00:27:07.035 Latency(us) 00:27:07.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.035 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:07.035 nvme0n1 : 2.00 18667.89 72.92 0.00 0.00 6848.84 3568.07 14466.47 00:27:07.035 =================================================================================================================== 00:27:07.035 Total : 18667.89 72.92 0.00 0.00 6848.84 3568.07 14466.47 00:27:07.035 { 00:27:07.035 "results": [ 00:27:07.035 { 00:27:07.035 "job": "nvme0n1", 00:27:07.035 "core_mask": "0x2", 00:27:07.035 "workload": "randread", 00:27:07.035 "status": "finished", 00:27:07.035 "queue_depth": 128, 00:27:07.035 "io_size": 4096, 00:27:07.035 "runtime": 2.004886, 00:27:07.035 "iops": 18667.894334141693, 00:27:07.035 "mibps": 72.92146224274099, 00:27:07.035 "io_failed": 0, 00:27:07.035 "io_timeout": 0, 00:27:07.035 "avg_latency_us": 6848.837206176172, 00:27:07.035 "min_latency_us": 3568.071111111111, 00:27:07.035 "max_latency_us": 14466.465185185185 00:27:07.035 } 00:27:07.035 ], 00:27:07.035 "core_count": 1 00:27:07.035 } 00:27:07.035 09:47:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:07.035 09:47:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:07.035 09:47:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # 
bperf_rpc accel_get_stats 00:27:07.035 09:47:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:07.035 09:47:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:07.035 | select(.opcode=="crc32c") 00:27:07.035 | "\(.module_name) \(.executed)"' 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 319369 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 319369 ']' 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 319369 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.035 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319369 00:27:07.293 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:07.293 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:07.293 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 319369' 00:27:07.293 killing process with pid 319369 00:27:07.293 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 319369 00:27:07.293 Received shutdown signal, test time was about 2.000000 seconds 00:27:07.293 00:27:07.293 Latency(us) 00:27:07.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.293 =================================================================================================================== 00:27:07.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:07.294 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 319369 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=319872 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:07.552 09:47:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 319872 /var/tmp/bperf.sock 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 319872 ']' 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:07.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.552 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:07.552 [2024-10-07 09:47:56.372591] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:07.552 [2024-10-07 09:47:56.372688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319872 ] 00:27:07.552 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:07.552 Zero copy mechanism will not be used. 
00:27:07.552 [2024-10-07 09:47:56.428409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.552 [2024-10-07 09:47:56.534294] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.810 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.810 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:07.810 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:07.810 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:07.810 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:08.069 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.069 09:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.635 nvme0n1 00:27:08.635 09:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:08.635 09:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:08.635 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:08.635 Zero copy mechanism will not be used. 00:27:08.635 Running I/O for 2 seconds... 
00:27:10.955 6524.00 IOPS, 815.50 MiB/s 6669.00 IOPS, 833.62 MiB/s 00:27:10.955 Latency(us) 00:27:10.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.955 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:10.955 nvme0n1 : 2.00 6669.01 833.63 0.00 0.00 2395.03 697.84 12233.39 00:27:10.955 =================================================================================================================== 00:27:10.955 Total : 6669.01 833.63 0.00 0.00 2395.03 697.84 12233.39 00:27:10.955 { 00:27:10.955 "results": [ 00:27:10.955 { 00:27:10.955 "job": "nvme0n1", 00:27:10.955 "core_mask": "0x2", 00:27:10.955 "workload": "randread", 00:27:10.955 "status": "finished", 00:27:10.955 "queue_depth": 16, 00:27:10.955 "io_size": 131072, 00:27:10.955 "runtime": 2.002395, 00:27:10.955 "iops": 6669.01385590755, 00:27:10.955 "mibps": 833.6267319884438, 00:27:10.955 "io_failed": 0, 00:27:10.955 "io_timeout": 0, 00:27:10.955 "avg_latency_us": 2395.029095124779, 00:27:10.955 "min_latency_us": 697.837037037037, 00:27:10.955 "max_latency_us": 12233.386666666667 00:27:10.955 } 00:27:10.955 ], 00:27:10.955 "core_count": 1 00:27:10.955 } 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:10.955 | select(.opcode=="crc32c") 00:27:10.955 | "\(.module_name) \(.executed)"' 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:10.955 09:47:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 319872 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 319872 ']' 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 319872 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:10.955 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:10.956 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319872 00:27:10.956 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:10.956 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:10.956 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 319872' 00:27:10.956 killing process with pid 319872 00:27:10.956 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 319872 00:27:10.956 Received shutdown signal, test time was about 2.000000 seconds 00:27:10.956 00:27:10.956 Latency(us) 00:27:10.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.956 
=================================================================================================================== 00:27:10.956 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.956 09:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 319872 00:27:11.214 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:11.214 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:11.214 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:11.214 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:11.214 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=320287 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 320287 /var/tmp/bperf.sock 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 320287 ']' 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:11.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.215 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:11.215 [2024-10-07 09:48:00.180552] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:11.215 [2024-10-07 09:48:00.180642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320287 ] 00:27:11.473 [2024-10-07 09:48:00.239563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.473 [2024-10-07 09:48:00.354465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.473 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.473 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:11.473 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:11.473 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:11.473 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:12.042 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.042 09:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.609 nvme0n1 00:27:12.609 09:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:12.609 09:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:12.609 Running I/O for 2 seconds... 00:27:14.483 21816.00 IOPS, 85.22 MiB/s 21803.00 IOPS, 85.17 MiB/s 00:27:14.483 Latency(us) 00:27:14.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.483 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:14.483 nvme0n1 : 2.00 21815.74 85.22 0.00 0.00 5861.14 2767.08 17476.27 00:27:14.483 =================================================================================================================== 00:27:14.483 Total : 21815.74 85.22 0.00 0.00 5861.14 2767.08 17476.27 00:27:14.483 { 00:27:14.483 "results": [ 00:27:14.483 { 00:27:14.483 "job": "nvme0n1", 00:27:14.483 "core_mask": "0x2", 00:27:14.483 "workload": "randwrite", 00:27:14.483 "status": "finished", 00:27:14.483 "queue_depth": 128, 00:27:14.483 "io_size": 4096, 00:27:14.483 "runtime": 2.004699, 00:27:14.483 "iops": 21815.743909684195, 00:27:14.483 "mibps": 85.21774964720389, 00:27:14.483 "io_failed": 0, 00:27:14.483 "io_timeout": 0, 00:27:14.483 "avg_latency_us": 5861.137215997724, 00:27:14.483 "min_latency_us": 2767.0755555555556, 00:27:14.483 "max_latency_us": 17476.266666666666 00:27:14.483 } 00:27:14.483 ], 00:27:14.483 "core_count": 1 
00:27:14.483 } 00:27:14.483 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:14.483 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:14.483 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:14.483 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:14.483 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:14.483 | select(.opcode=="crc32c") 00:27:14.483 | "\(.module_name) \(.executed)"' 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 320287 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 320287 ']' 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 320287 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:14.741 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers 
-o comm= 320287 00:27:15.000 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:15.000 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:15.000 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320287' 00:27:15.000 killing process with pid 320287 00:27:15.000 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 320287 00:27:15.000 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.000 00:27:15.000 Latency(us) 00:27:15.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.000 =================================================================================================================== 00:27:15.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.000 09:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 320287 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@83 -- # bperfpid=320883 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 320883 /var/tmp/bperf.sock 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 320883 ']' 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:15.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.258 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:15.258 [2024-10-07 09:48:04.078155] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:15.258 [2024-10-07 09:48:04.078242] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320883 ] 00:27:15.258 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:15.258 Zero copy mechanism will not be used. 
00:27:15.258 [2024-10-07 09:48:04.133875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.258 [2024-10-07 09:48:04.237011] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.517 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.517 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:15.517 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:15.517 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:15.517 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:15.775 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.776 09:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.345 nvme0n1 00:27:16.345 09:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:16.345 09:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:16.345 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:16.345 Zero copy mechanism will not be used. 00:27:16.345 Running I/O for 2 seconds... 
00:27:18.660 5645.00 IOPS, 705.62 MiB/s 5594.50 IOPS, 699.31 MiB/s 00:27:18.660 Latency(us) 00:27:18.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.660 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:18.660 nvme0n1 : 2.00 5593.81 699.23 0.00 0.00 2853.31 1844.72 12087.75 00:27:18.660 =================================================================================================================== 00:27:18.660 Total : 5593.81 699.23 0.00 0.00 2853.31 1844.72 12087.75 00:27:18.660 { 00:27:18.660 "results": [ 00:27:18.660 { 00:27:18.660 "job": "nvme0n1", 00:27:18.660 "core_mask": "0x2", 00:27:18.660 "workload": "randwrite", 00:27:18.660 "status": "finished", 00:27:18.660 "queue_depth": 16, 00:27:18.660 "io_size": 131072, 00:27:18.660 "runtime": 2.004, 00:27:18.660 "iops": 5593.812375249501, 00:27:18.660 "mibps": 699.2265469061877, 00:27:18.660 "io_failed": 0, 00:27:18.660 "io_timeout": 0, 00:27:18.660 "avg_latency_us": 2853.309299765421, 00:27:18.660 "min_latency_us": 1844.717037037037, 00:27:18.660 "max_latency_us": 12087.75111111111 00:27:18.660 } 00:27:18.660 ], 00:27:18.660 "core_count": 1 00:27:18.660 } 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:18.661 | select(.opcode=="crc32c") 00:27:18.661 | "\(.module_name) \(.executed)"' 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:18.661 09:48:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 320883 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 320883 ']' 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 320883 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320883 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320883' 00:27:18.661 killing process with pid 320883 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 320883 00:27:18.661 Received shutdown signal, test time was about 2.000000 seconds 00:27:18.661 00:27:18.661 Latency(us) 00:27:18.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.661 
=================================================================================================================== 00:27:18.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.661 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 320883 00:27:18.920 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 319348 00:27:18.920 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 319348 ']' 00:27:18.920 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 319348 00:27:18.920 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:18.920 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.920 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319348 00:27:18.920 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:18.921 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:18.921 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 319348' 00:27:18.921 killing process with pid 319348 00:27:18.921 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 319348 00:27:18.921 09:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 319348 00:27:19.179 00:27:19.179 real 0m16.153s 00:27:19.179 user 0m32.461s 00:27:19.179 sys 0m4.183s 00:27:19.179 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.179 09:48:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:19.179 ************************************ 00:27:19.179 END TEST nvmf_digest_clean 00:27:19.179 ************************************ 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.438 ************************************ 00:27:19.438 START TEST nvmf_digest_error 00:27:19.438 ************************************ 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=321666 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 321666 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # 
'[' -z 321666 ']' 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.438 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:19.438 [2024-10-07 09:48:08.276802] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:19.438 [2024-10-07 09:48:08.276890] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.438 [2024-10-07 09:48:08.338769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.695 [2024-10-07 09:48:08.448565] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.695 [2024-10-07 09:48:08.448633] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.695 [2024-10-07 09:48:08.448660] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.695 [2024-10-07 09:48:08.448679] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:19.695 [2024-10-07 09:48:08.448690] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:19.695 [2024-10-07 09:48:08.449243] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:19.695 [2024-10-07 09:48:08.537854] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:27:19.695 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:19.696 null0
00:27:19.696 [2024-10-07 09:48:08.642622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:19.696 [2024-10-07 09:48:08.666904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=321900
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 321900 /var/tmp/bperf.sock
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 321900 ']'
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:19.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:19.696 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:19.954 [2024-10-07 09:48:08.715254] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:27:19.954 [2024-10-07 09:48:08.715316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321900 ]
00:27:19.954 [2024-10-07 09:48:08.772276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:19.954 [2024-10-07 09:48:08.883757] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:27:20.211 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:20.211 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:20.211 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:20.211 09:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:20.471 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:20.472 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.472 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:20.472 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.472 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:20.472 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:20.730 nvme0n1
00:27:20.730 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:20.730 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.730 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:20.730 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.730 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:20.731 09:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:20.991 Running I/O for 2 seconds...
00:27:20.991 [2024-10-07 09:48:09.753853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.753907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.753926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.768922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.768955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.768986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.781397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.781427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.781443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.793291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.793321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.793338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.806573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.806604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.806620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.819138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.819168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.819194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.832109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.832140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.832156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.846334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.846379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.846396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.857599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.857628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.857644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.872512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.872542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.872559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.887444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.887479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.887497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.902277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.902309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.902326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.913786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.913836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.913854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.926360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.926389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.926405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.942764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.942804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.942823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.958536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.991 [2024-10-07 09:48:09.958566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.991 [2024-10-07 09:48:09.958582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.991 [2024-10-07 09:48:09.974987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:20.992 [2024-10-07 09:48:09.975022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.992 [2024-10-07 09:48:09.975039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.252 [2024-10-07 09:48:09.990719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.252 [2024-10-07 09:48:09.990765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:09.990782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.002401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.002434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.002451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.017489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.017546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.017565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.034285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.034330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.034348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.048442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.048479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.048498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.060219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.060250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.060267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.075749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.075783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.075800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.091581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.091614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.091630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.104382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.104413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.104430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.117740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.117772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.117789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.134564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.134596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.134613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.150919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.150966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.150990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.161629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.161659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.161698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.178110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.178156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.178174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001
p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.192632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.192691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.192720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.204593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.204638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.204654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.221888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.221920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.221952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.253 [2024-10-07 09:48:10.237176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.253 [2024-10-07 09:48:10.237209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.253 [2024-10-07 09:48:10.237227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.253402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.253445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.253463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.264799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.264831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.264849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.281261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.281293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.281325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.296273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.296303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.296334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.312620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.312676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.312702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.324918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.324970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.324987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.338757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.338789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.338807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.354480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.354511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.354528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.370736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.370782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.370801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.384324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.384378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.384404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.395750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.395782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.395827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.410972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.411003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.411039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.513 [2024-10-07 09:48:10.424673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.513 [2024-10-07 09:48:10.424704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.513 [2024-10-07 09:48:10.424724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.514 [2024-10-07 09:48:10.436513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.514 [2024-10-07 09:48:10.436545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.514 [2024-10-07 09:48:10.436562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.514 [2024-10-07 09:48:10.450948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.514 [2024-10-07 09:48:10.451002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.514 [2024-10-07 09:48:10.451017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.514 [2024-10-07 09:48:10.467306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.514 [2024-10-07
09:48:10.467335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.514 [2024-10-07 09:48:10.467354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.514 [2024-10-07 09:48:10.483892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.514 [2024-10-07 09:48:10.483939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.514 [2024-10-07 09:48:10.483956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.514 [2024-10-07 09:48:10.499107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.514 [2024-10-07 09:48:10.499156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.514 [2024-10-07 09:48:10.499175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.772 [2024-10-07 09:48:10.512468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.772 [2024-10-07 09:48:10.512499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.772 [2024-10-07 09:48:10.512517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.772 [2024-10-07 09:48:10.525584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.772 [2024-10-07 09:48:10.525613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.772 [2024-10-07 09:48:10.525637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.772 [2024-10-07 09:48:10.539200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.772 [2024-10-07 09:48:10.539245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.772 [2024-10-07 09:48:10.539266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.772 [2024-10-07 09:48:10.552507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.772 [2024-10-07 09:48:10.552539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.552561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.564116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.564152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.564173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.577014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.577061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.577078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.593171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.593200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.593233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.609879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.609911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.609928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.626401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.626430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.626448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.641302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.641332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.641349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.659008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.659042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.659061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.675524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.675553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.675572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.773 [2024-10-07 09:48:10.686165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0)
00:27:21.773 [2024-10-07 09:48:10.686196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.773 [2024-10-07 09:48:10.686228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.773 [2024-10-07 09:48:10.702417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:21.773 [2024-10-07 09:48:10.702450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.773 [2024-10-07 09:48:10.702468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.773 [2024-10-07 09:48:10.717185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:21.773 [2024-10-07 09:48:10.717218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.773 [2024-10-07 09:48:10.717236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.773 [2024-10-07 09:48:10.730117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:21.773 [2024-10-07 09:48:10.730146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.773 [2024-10-07 09:48:10.730162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.773 17767.00 IOPS, 69.40 MiB/s [2024-10-07 09:48:10.744185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:21.773 [2024-10-07 09:48:10.744214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:21.773 [2024-10-07 09:48:10.744231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.773 [2024-10-07 09:48:10.759311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:21.773 [2024-10-07 09:48:10.759342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.773 [2024-10-07 09:48:10.759362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.032 [2024-10-07 09:48:10.772240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.772285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.772319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.783535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.783563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.783585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.797646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.797684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 
nsid:1 lba:6265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.797707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.812299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.812329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.812359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.827208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.827248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.827290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.842607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.842637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.842677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.856843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.856877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.856896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.868611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.868640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.868682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.881265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.881294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.881312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.897329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.897359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.897379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.911460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.911489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.911512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.923170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.923199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.923216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.934808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.934844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.934862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.947228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.947271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.947288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.962601] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.962631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.962663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.975832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.975863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.975882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:10.988243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:10.988285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:10.988303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:11.000880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:11.000913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:11.000931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:22.033 [2024-10-07 09:48:11.014800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.033 [2024-10-07 09:48:11.014830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.033 [2024-10-07 09:48:11.014849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.030448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.030483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.030507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.043122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.043153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.043192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.054458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.054486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.054507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.070651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.070704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.070723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.085961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.086007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.086025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.100276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.100305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.100322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.116764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.116793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 
09:48:11.116813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.129398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.129426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.129443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.142077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.142105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.142124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.156605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.156635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.294 [2024-10-07 09:48:11.156673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.294 [2024-10-07 09:48:11.171891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.294 [2024-10-07 09:48:11.171929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.171960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.295 [2024-10-07 09:48:11.187634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.295 [2024-10-07 09:48:11.187663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.187702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.295 [2024-10-07 09:48:11.202229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.295 [2024-10-07 09:48:11.202258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.202274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.295 [2024-10-07 09:48:11.214640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.295 [2024-10-07 09:48:11.214692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.214712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.295 [2024-10-07 09:48:11.226523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.295 [2024-10-07 09:48:11.226553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.226568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.295 [2024-10-07 09:48:11.241187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.295 [2024-10-07 09:48:11.241219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.241236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.295 [2024-10-07 09:48:11.255969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.295 [2024-10-07 09:48:11.256014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.256030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.295 [2024-10-07 09:48:11.267032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.295 [2024-10-07 09:48:11.267062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.267078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.295 [2024-10-07 09:48:11.281910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 
00:27:22.295 [2024-10-07 09:48:11.281956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.295 [2024-10-07 09:48:11.281972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.554 [2024-10-07 09:48:11.298031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.554 [2024-10-07 09:48:11.298064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.554 [2024-10-07 09:48:11.298081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.554 [2024-10-07 09:48:11.310603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.554 [2024-10-07 09:48:11.310634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.554 [2024-10-07 09:48:11.310650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.554 [2024-10-07 09:48:11.323392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.554 [2024-10-07 09:48:11.323441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.554 [2024-10-07 09:48:11.323459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.554 [2024-10-07 09:48:11.335895] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.554 [2024-10-07 09:48:11.335928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.554 [2024-10-07 09:48:11.335946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.554 [2024-10-07 09:48:11.346732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.554 [2024-10-07 09:48:11.346763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.554 [2024-10-07 09:48:11.346779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.554 [2024-10-07 09:48:11.358454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.554 [2024-10-07 09:48:11.358483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.358499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.371937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.371984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.372001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.386808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.386841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.386859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.400910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.400939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.400962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.415820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.415856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.415875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.426314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.426347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.426364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.441558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.441588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.441605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.454370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.454398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.454414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.466734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.466764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.466782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.479422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.479471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.479489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.491874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.491903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.491919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.507281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.507310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.507327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.522160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.522195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.522211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.536417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.536448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17011 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.536480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.555 [2024-10-07 09:48:11.549021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.555 [2024-10-07 09:48:11.549053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.555 [2024-10-07 09:48:11.549070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.562775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.562813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.562832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.576124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.576157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.576174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.587545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.587576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:16670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.587592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.601789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.601828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.601847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.616765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.616804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.616820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.631279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.631308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.631324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.642900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 
09:48:11.642931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.642962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.657469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.657501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.657518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.670948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.670979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.671013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.686994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.687026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.687044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.698773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.698820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.698837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.814 [2024-10-07 09:48:11.713314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.814 [2024-10-07 09:48:11.713345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.814 [2024-10-07 09:48:11.713361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.815 [2024-10-07 09:48:11.729338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.815 [2024-10-07 09:48:11.729369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.815 [2024-10-07 09:48:11.729385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.815 18230.00 IOPS, 71.21 MiB/s [2024-10-07 09:48:11.743629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14f45b0) 00:27:22.815 [2024-10-07 09:48:11.743660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.815 [2024-10-07 09:48:11.743712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.815 00:27:22.815 
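[Editor's note] The periodic bdevperf counter interleaved above ("18230.00 IOPS, 71.21 MiB/s") and the summary that follows are tied together by the 4096-byte I/O size of this job: MiB/s = IOPS x 4096 / 2^20, i.e. IOPS / 256. A quick sketch (plain Python, not part of the test scripts) confirms the arithmetic:

```python
def iops_to_mibs(iops: float, io_size: int = 4096) -> float:
    """Throughput in MiB/s for a fixed per-I/O size in bytes."""
    return iops * io_size / (1024 * 1024)

# Periodic counter in the log: 18230.00 IOPS -> 71.21 MiB/s
print(round(iops_to_mibs(18230.00), 2))   # 71.21
```

The same relation reproduces the final summary's "mibps" field (18233.07418261878 IOPS -> 71.22294602585461 MiB/s).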
Latency(us)
00:27:22.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:22.815 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:22.815 nvme0n1 : 2.01 18233.07 71.22 0.00 0.00 7012.38 3689.43 23204.60
00:27:22.815 ===================================================================================================================
00:27:22.815 Total : 18233.07 71.22 0.00 0.00 7012.38 3689.43 23204.60
00:27:22.815 {
00:27:22.815   "results": [
00:27:22.815     {
00:27:22.815       "job": "nvme0n1",
00:27:22.815       "core_mask": "0x2",
00:27:22.815       "workload": "randread",
00:27:22.815       "status": "finished",
00:27:22.815       "queue_depth": 128,
00:27:22.815       "io_size": 4096,
00:27:22.815       "runtime": 2.006683,
00:27:22.815       "iops": 18233.07418261878,
00:27:22.815       "mibps": 71.22294602585461,
00:27:22.815       "io_failed": 0,
00:27:22.815       "io_timeout": 0,
00:27:22.815       "avg_latency_us": 7012.383378966591,
00:27:22.815       "min_latency_us": 3689.434074074074,
00:27:22.815       "max_latency_us": 23204.59851851852
00:27:22.815     }
00:27:22.815   ],
00:27:22.815   "core_count": 1
00:27:22.815 }
00:27:22.815 09:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:22.815 09:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:22.815 09:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:22.815 | .driver_specific
00:27:22.815 | .nvme_error
00:27:22.815 | .status_code
00:27:22.815 | .command_transient_transport_error'
00:27:22.815 09:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:23.073 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:27:23.073 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 321900
00:27:23.073 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 321900 ']'
00:27:23.073 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 321900
00:27:23.073 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:23.073 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:23.073 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321900
00:27:23.332 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:23.332 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:23.332 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321900'
killing process with pid 321900
09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 321900
Received shutdown signal, test time was about 2.000000 seconds
00:27:23.332
00:27:23.332 Latency(us)
00:27:23.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:23.332 ===================================================================================================================
00:27:23.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:23.332 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 321900
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=322338
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 322338 /var/tmp/bperf.sock
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 322338 ']'
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:23.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:23.591 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:23.591 [2024-10-07 09:48:12.409582] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:27:23.591 [2024-10-07 09:48:12.409693] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322338 ]
00:27:23.591 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:23.591 Zero copy mechanism will not be used.
00:27:23.591 [2024-10-07 09:48:12.465277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:23.591 [2024-10-07 09:48:12.574355] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:27:23.849 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:23.849 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:23.849 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:23.849 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:24.107 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:24.107 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.107 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:24.107 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.107 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:24.107 09:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:24.365 nvme0n1
00:27:24.365 09:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:24.365 09:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.365 09:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:24.365 09:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.365 09:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:24.365 09:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:24.626 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:24.626 Zero copy mechanism will not be used.
00:27:24.626 Running I/O for 2 seconds...
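[Editor's note] The xtrace above restarts bdevperf for the next error case: it enables per-command NVMe error statistics (bdev_nvme_set_options --nvme-error-stat), attaches the TCP controller with data digest enabled (--ddgst), arms crc32c corruption on every 32nd accel operation (accel_error_inject_error -o crc32c -t corrupt -i 32), and drives I/O via perform_tests. The pass/fail check earlier in the log pulls the transient-error counter out of bdev_get_iostat with jq. As an illustration only, here is a Python equivalent of that jq filter, fed a trimmed, hypothetical sample of the iostat JSON (only the count 143 comes from the log's "(( 143 > 0 ))" check; the rest of the sample is made up):

```python
import json

def get_transient_errcount(iostat_json: str) -> int:
    # Same path as the jq filter in the log:
    # .bdevs[0] | .driver_specific | .nvme_error | .status_code
    #           | .command_transient_transport_error
    stats = json.loads(iostat_json)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

# Hypothetical, trimmed bdev_get_iostat output for illustration
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 143}
            }
        }
    }]
})
print(get_transient_errcount(sample))  # 143
```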
00:27:24.626 [2024-10-07 09:48:13.482702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.482757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.482777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.487296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.487330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.487348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.491862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.491897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.491915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.496369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.496401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.496418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.501180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.501212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.501229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.506067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.506099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.506117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.511055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.511088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.511105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.516035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.516068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.516086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.521006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.521039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.521057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.525804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.525837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.525854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.530572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.530603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.530620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.535540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.535571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:24.626 [2024-10-07 09:48:13.535588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.540407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.540453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.540469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.545324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.545356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.545374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.550146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.550194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.550212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.554468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.554516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.554533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.557138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.557169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.557192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.560771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.560803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.560820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.564973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.565009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.626 [2024-10-07 09:48:13.565027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.626 [2024-10-07 09:48:13.569439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:24.626 [2024-10-07 09:48:13.569474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.626 [2024-10-07 09:48:13.569491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.626 [2024-10-07 09:48:13.573917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.626 [2024-10-07 09:48:13.573949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.626 [2024-10-07 09:48:13.573966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.578428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.578462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.578480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.582826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.582858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.582875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.587391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.587422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.587440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.591858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.591889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.591906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.596348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.596388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.596407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.600720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.600752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.600769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.605051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.605083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.605100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.609687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.609719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.609737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.614367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.614403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.614421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.627 [2024-10-07 09:48:13.618813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.627 [2024-10-07 09:48:13.618845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.627 [2024-10-07 09:48:13.618863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.888 [2024-10-07 09:48:13.623343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.888 [2024-10-07 09:48:13.623374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.888 [2024-10-07 09:48:13.623391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.888 [2024-10-07 09:48:13.628219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.888 [2024-10-07 09:48:13.628265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.888 [2024-10-07 09:48:13.628282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.888 [2024-10-07 09:48:13.633250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.888 [2024-10-07 09:48:13.633283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.888 [2024-10-07 09:48:13.633301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.888 [2024-10-07 09:48:13.637672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.888 [2024-10-07 09:48:13.637704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.888 [2024-10-07 09:48:13.637721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.888 [2024-10-07 09:48:13.642161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.888 [2024-10-07 09:48:13.642192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.888 [2024-10-07 09:48:13.642209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.646611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.646642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.646659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.651023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.651054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.651071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.655428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.655459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.655476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.659873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.659904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.659921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.664268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.664299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.664316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.668680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.668712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.668728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.673212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.673253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.673287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.677886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.677917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.677934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.682375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.682406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.682423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.687062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.687092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.687109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.692394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.692430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.692449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.697169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.697205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.697223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.702536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.702568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.702589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.709860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.709891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.709909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.716192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.716224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.716241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.722156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.722189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.722207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.728419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.728451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.728468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.733337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.733370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.733387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.738366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.738397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.738415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.743249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.743280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.743298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.747769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.747804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.747823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.889 [2024-10-07 09:48:13.752336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.889 [2024-10-07 09:48:13.752367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.889 [2024-10-07 09:48:13.752385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.757095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.757127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.757149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.762170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.762201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.762225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.766727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.766758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.766779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.771284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.771315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.771332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.775743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.775773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.775791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.780218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.780249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.780266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.784705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.784734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.784751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.788825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.788854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.788871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.791548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.791577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.791607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.795890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.795920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.795943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.800284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.800333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.800349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.805045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.805090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.805107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.809506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.809537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.809553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.813845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.813876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.813893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.818300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.818331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.818347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.822961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.822992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.823010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.827491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.827522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.827539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.831973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.832005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.832022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.837166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.837212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.837230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.843860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.890 [2024-10-07 09:48:13.843905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.890 [2024-10-07 09:48:13.843922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.890 [2024-10-07 09:48:13.850832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.891 [2024-10-07 09:48:13.850864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.891 [2024-10-07 09:48:13.850881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.891 [2024-10-07 09:48:13.856511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.891 [2024-10-07 09:48:13.856542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.891 [2024-10-07 09:48:13.856576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:24.891 [2024-10-07 09:48:13.862712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.891 [2024-10-07 09:48:13.862744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.891 [2024-10-07 09:48:13.862761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:24.891 [2024-10-07 09:48:13.869002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.891 [2024-10-07 09:48:13.869034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.891 [2024-10-07 09:48:13.869050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:24.891 [2024-10-07 09:48:13.874835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.891 [2024-10-07 09:48:13.874868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.891 [2024-10-07 09:48:13.874886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.891 [2024-10-07 09:48:13.880101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:24.891 [2024-10-07 09:48:13.880132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:24.891 [2024-10-07 09:48:13.880149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.885642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.885699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.885733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.891308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.891341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.891379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.896978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.897010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.897028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.903591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.903624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.903641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.909846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.909879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.909897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.915405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.915437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.915454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.921413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.921445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.921462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.927591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.927638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.927655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.933538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.933585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.933601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.939063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.939094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.939126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:25.151 [2024-10-07 09:48:13.945028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.151 [2024-10-07 09:48:13.945066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.151 [2024-10-07 09:48:13.945099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:25.152 [2024-10-07 09:48:13.950779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.152 [2024-10-07 09:48:13.950813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.152 [2024-10-07 09:48:13.950830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:25.152 [2024-10-07 09:48:13.956094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.152 [2024-10-07 09:48:13.956145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.152 [2024-10-07 09:48:13.956163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:25.152 [2024-10-07 09:48:13.961493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.152 [2024-10-07 09:48:13.961541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.152 [2024-10-07 09:48:13.961559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:25.152 [2024-10-07 09:48:13.966514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.152 [2024-10-07 09:48:13.966546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.152 [2024-10-07 09:48:13.966580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:25.152 [2024-10-07 09:48:13.971705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.152 [2024-10-07 09:48:13.971737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.152 [2024-10-07 09:48:13.971755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:25.152 [2024-10-07 09:48:13.978209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.152 [2024-10-07 09:48:13.978240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.152 [2024-10-07
09:48:13.978256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:13.985471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:13.985505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:13.985538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:13.990731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:13.990762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:13.990785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:13.995957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:13.996006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:13.996023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.000839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.000870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.000901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.006199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.006233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.006251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.012935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.012968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.012986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.019967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.019999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.020016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.025351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.025382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.025414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.031311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.031358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.031376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.035934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.035981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.035999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.040874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.040911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.040930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.046145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.046177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.046195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.051161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.051192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.051209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.056179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.056211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.056228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.061810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.061843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.152 [2024-10-07 09:48:14.061861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.152 [2024-10-07 09:48:14.067389] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.152 [2024-10-07 09:48:14.067421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.067438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.073727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.073769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.073787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.080294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.080340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.080358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.086202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.086234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.086251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.092157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.092187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.092205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.097659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.097699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.097717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.102919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.102951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.102968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.108235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.108267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.108284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.113611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.113644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.113663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.118590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.118621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.118638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.123770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.123801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.123819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.128946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.128978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 
09:48:14.128996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.133655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.133694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.133718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.138013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.138043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.138060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.153 [2024-10-07 09:48:14.142367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.153 [2024-10-07 09:48:14.142397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.153 [2024-10-07 09:48:14.142414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.413 [2024-10-07 09:48:14.146756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.413 [2024-10-07 09:48:14.146789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.413 [2024-10-07 09:48:14.146808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.413 [2024-10-07 09:48:14.151153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.413 [2024-10-07 09:48:14.151183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.413 [2024-10-07 09:48:14.151199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.155729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.155760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.155777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.160703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.160733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.160750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.165325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.165359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.165377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.170612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.170644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.170662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.176546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.176587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.176606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.182211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.182244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.182277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.187486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.187521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.187540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.192218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.192249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.192267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.195029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.195059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.195076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.199402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.199432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.199447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.203705] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.203749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.203766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.208186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.208215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.208245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.212958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.213002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.213019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.217337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.217367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.217383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.221919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.221960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.221976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.226523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.226551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.226566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.231077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.231106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.231122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.414 [2024-10-07 09:48:14.235679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.414 [2024-10-07 09:48:14.235709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.414 [2024-10-07 09:48:14.235726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:25.414 [2024-10-07 09:48:14.240022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.414 [2024-10-07 09:48:14.240052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.414 [2024-10-07 09:48:14.240084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[the same three-line sequence — nvme_tcp.c:1470 data digest error on tqpair=(0x2146480), nvme_qpair.c:243 READ command notice, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats continuously on qid:1 with varying cid and lba values from [2024-10-07 09:48:14.244871] through [2024-10-07 09:48:14.637515]]
00:27:25.676 6127.00 IOPS, 765.88 MiB/s
00:27:25.678 [2024-10-07 09:48:14.642498] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.678 [2024-10-07 09:48:14.642529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.678 [2024-10-07 09:48:14.642547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.678 [2024-10-07 09:48:14.647258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.678 [2024-10-07 09:48:14.647290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.678 [2024-10-07 09:48:14.647307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.678 [2024-10-07 09:48:14.652797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.678 [2024-10-07 09:48:14.652828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.678 [2024-10-07 09:48:14.652845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.678 [2024-10-07 09:48:14.657609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.678 [2024-10-07 09:48:14.657641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.678 [2024-10-07 09:48:14.657658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:25.678 [2024-10-07 09:48:14.662548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.678 [2024-10-07 09:48:14.662580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.678 [2024-10-07 09:48:14.662597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.678 [2024-10-07 09:48:14.667500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.678 [2024-10-07 09:48:14.667531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.678 [2024-10-07 09:48:14.667549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.672634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.672674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.672694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.677569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.677601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.677618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.682520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.682551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.682575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.687617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.687662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.687696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.693413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.693444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.693462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.697215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.697245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.697262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.701212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.701240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.701256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.706442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.706489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.706506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.712266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.712312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.712330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.717587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.717633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.717655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.721917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.721948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.721965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.724789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.724824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.724841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.728584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.939 [2024-10-07 09:48:14.728615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.939 [2024-10-07 09:48:14.728632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.939 [2024-10-07 09:48:14.732063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.732095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.732112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.736266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.736294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.736314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.741089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.741135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.741152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.746279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.746307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.746323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.751091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.751120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.751136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.756057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.756089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.756106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.760938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.760971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.760994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.764233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.764264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.764280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.767838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.767872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.767889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.771463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.771494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.771511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.774770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.774801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.774818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.777563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.777592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.777609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.782235] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.782267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.782284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.788550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.788582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.788599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.793715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.793760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.793778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.801164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.801200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.801218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.807435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.807481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.807498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.812627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.812681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.812701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.817012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.817057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.817074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.821490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.821535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.821552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.825989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.826019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.940 [2024-10-07 09:48:14.826036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.940 [2024-10-07 09:48:14.830488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.940 [2024-10-07 09:48:14.830519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.830536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.834940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.834970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.834988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.839509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.839555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.839571] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.843989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.844033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.844050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.848724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.848755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.848771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.854111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.854157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.854174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.861615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.861647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.861688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.867518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.867549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.867580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.872942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.872987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.873005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.878221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.878254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.941 [2024-10-07 09:48:14.878286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.941 [2024-10-07 09:48:14.883187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:25.941 [2024-10-07 09:48:14.883232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.883250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:25.941 [2024-10-07 09:48:14.888558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.941 [2024-10-07 09:48:14.888604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.888630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:25.941 [2024-10-07 09:48:14.896124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.941 [2024-10-07 09:48:14.896171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.896189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:25.941 [2024-10-07 09:48:14.902014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.941 [2024-10-07 09:48:14.902061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.902078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:25.941 [2024-10-07 09:48:14.907254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.941 [2024-10-07 09:48:14.907302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.907320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:25.941 [2024-10-07 09:48:14.912433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.941 [2024-10-07 09:48:14.912480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.912496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:25.941 [2024-10-07 09:48:14.917304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.941 [2024-10-07 09:48:14.917335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.917365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:25.941 [2024-10-07 09:48:14.923135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.941 [2024-10-07 09:48:14.923165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.923187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:25.941 [2024-10-07 09:48:14.930707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:25.941 [2024-10-07 09:48:14.930738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:25.941 [2024-10-07 09:48:14.930767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.202 [2024-10-07 09:48:14.936396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.202 [2024-10-07 09:48:14.936429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.202 [2024-10-07 09:48:14.936466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.202 [2024-10-07 09:48:14.941792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.202 [2024-10-07 09:48:14.941829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.202 [2024-10-07 09:48:14.941848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.202 [2024-10-07 09:48:14.946995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.202 [2024-10-07 09:48:14.947028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.202 [2024-10-07 09:48:14.947047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.202 [2024-10-07 09:48:14.951963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.202 [2024-10-07 09:48:14.951994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.202 [2024-10-07 09:48:14.952027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.202 [2024-10-07 09:48:14.957170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.957202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.957221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:14.962042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.962074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.962092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:14.966980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.967012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.967029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:14.971614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.971646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.971678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:14.976088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.976119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.976153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:14.980576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.980610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.980629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:14.985499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.985530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.985574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:14.990748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.990780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.990808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:14.995347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:14.995378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:14.995399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.000052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.000083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.000102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.004602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.004633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.004672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.009193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.009223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.009240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.013725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.013758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.013784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.018616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.018646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.018675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.023837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.023868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.023895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.028443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.028488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.028506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.033176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.033222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.033242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.037767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.037798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.037839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.042272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.042302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.042318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.046715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.046746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.046768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.051298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.051330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.051349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.055830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.055862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.055883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.060454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.060485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.060505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.064963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.065001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.065019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.069389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.069425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.069443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.074021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.074052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.203 [2024-10-07 09:48:15.074070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.203 [2024-10-07 09:48:15.078958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.203 [2024-10-07 09:48:15.078994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.079015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.082461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.082505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.082521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.086276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.086306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.086322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.090813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.090842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.090858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.095878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.095909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.095925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.100691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.100721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.100744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.105101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.105130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.105146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.110132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.110161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.110178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.113796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.113837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.113855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.118215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.118263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.118280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.122802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.122832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.122848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.127181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.127209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.127239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.131633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.131664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.131692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.136171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.136214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.136230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.140764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.140814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.140832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.145382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.145429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.145444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.149817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.149847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.149864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.154167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.154198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.154215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.159282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.159326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.159343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.164061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.164107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.164124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.168560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.168605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.168621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.172908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.172939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.172956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.177323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.177353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.177369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.181579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.181609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.181625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.185879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.185910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.185927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.190078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.190123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.190139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.204 [2024-10-07 09:48:15.194507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.204 [2024-10-07 09:48:15.194537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.204 [2024-10-07 09:48:15.194553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.199082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.199112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.199127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.203756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.203802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.203819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.208159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.208190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.208206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.212639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.212683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.212704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.217032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.217064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.217087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.221634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.221675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.221694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.226207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.226237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.226253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.230507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.230540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.230572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.235150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.235180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.235197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.239524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.239552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.239569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.243787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.243818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.243835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.467 [2024-10-07 09:48:15.248286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480)
00:27:26.467 [2024-10-07 09:48:15.248316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.467 [2024-10-07 09:48:15.248346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:27:26.467 [2024-10-07 09:48:15.252925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.467 [2024-10-07 09:48:15.252960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.467 [2024-10-07 09:48:15.252992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.467 [2024-10-07 09:48:15.257495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.467 [2024-10-07 09:48:15.257530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.467 [2024-10-07 09:48:15.257551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.467 [2024-10-07 09:48:15.262120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.467 [2024-10-07 09:48:15.262165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.467 [2024-10-07 09:48:15.262181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.467 [2024-10-07 09:48:15.266713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.467 [2024-10-07 09:48:15.266767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.467 [2024-10-07 09:48:15.266784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.467 [2024-10-07 09:48:15.271233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.467 [2024-10-07 09:48:15.271263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.467 [2024-10-07 09:48:15.271280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.467 [2024-10-07 09:48:15.275849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.467 [2024-10-07 09:48:15.275893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.467 [2024-10-07 09:48:15.275909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.467 [2024-10-07 09:48:15.280602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.280647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.280664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.285211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.285241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.285272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.289775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.289831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.289848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.294610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.294640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.294681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.298944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.298975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.299007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.303451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.303496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.303513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.307920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.307950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.307982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.312522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.312569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.312586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.316981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.317010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.317025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.321575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.321604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.321620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.325986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.326017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.326053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.330312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.330342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.330359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.334816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.334847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.334869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.339358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.339404] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.339421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.343932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.343964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.343996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.348403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.348449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.348466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.352927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.352971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.352988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.357428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.357474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.357491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.361973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.362003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.362033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.366725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.366756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.366774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.371772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.371803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.371820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.376414] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.376445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.376478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.381407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.381437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.381454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.386313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.386343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.386374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.391464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.391509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.391527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.396699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.396731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.396748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.401453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.468 [2024-10-07 09:48:15.401483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.468 [2024-10-07 09:48:15.401515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.468 [2024-10-07 09:48:15.406971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.407002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.407033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.412248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.412292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.412308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.417210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.417257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.417279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.421823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.421854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.421872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.426692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.426722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.426739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.431418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.431464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.431481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.435919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.435965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.435981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.440711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.440742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.440759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.445688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.445719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.445736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.450060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.450090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.450106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.454712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.454743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.454760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.469 [2024-10-07 09:48:15.459075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.469 [2024-10-07 09:48:15.459110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.469 [2024-10-07 09:48:15.459129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.728 [2024-10-07 09:48:15.463638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.728 [2024-10-07 09:48:15.463690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.728 [2024-10-07 09:48:15.463708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.728 [2024-10-07 09:48:15.468375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.728 [2024-10-07 09:48:15.468404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.728 [2024-10-07 09:48:15.468436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.728 [2024-10-07 09:48:15.472593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2146480) 00:27:26.728 [2024-10-07 09:48:15.472622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.728 [2024-10-07 09:48:15.472639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.728 6326.00 IOPS, 790.75 MiB/s 00:27:26.728 Latency(us) 00:27:26.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.728 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:26.728 nvme0n1 : 2.00 6326.56 790.82 0.00 0.00 2524.76 685.70 11116.85 00:27:26.728 =================================================================================================================== 00:27:26.728 Total : 6326.56 790.82 0.00 0.00 2524.76 685.70 11116.85 00:27:26.728 { 00:27:26.728 "results": [ 00:27:26.728 { 00:27:26.728 "job": "nvme0n1", 00:27:26.728 "core_mask": "0x2", 00:27:26.728 "workload": "randread", 00:27:26.728 "status": "finished", 00:27:26.728 "queue_depth": 16, 00:27:26.728 "io_size": 131072, 00:27:26.728 "runtime": 2.002353, 00:27:26.728 "iops": 6326.556805917838, 00:27:26.728 "mibps": 790.8196007397297, 00:27:26.728 "io_failed": 0, 00:27:26.728 "io_timeout": 0, 00:27:26.728 "avg_latency_us": 2524.7601445461883, 00:27:26.728 "min_latency_us": 685.7007407407407, 00:27:26.728 "max_latency_us": 11116.847407407407 00:27:26.728 } 00:27:26.728 ], 00:27:26.728 "core_count": 1 00:27:26.728 } 00:27:26.728 09:48:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:26.728 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:26.728 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:26.728 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:26.728 | .driver_specific 00:27:26.728 | .nvme_error 00:27:26.728 | .status_code 00:27:26.728 | .command_transient_transport_error' 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 408 > 0 )) 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 322338 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 322338 ']' 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 322338 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322338 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322338' 00:27:26.988 
killing process with pid 322338 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 322338 00:27:26.988 Received shutdown signal, test time was about 2.000000 seconds 00:27:26.988 00:27:26.988 Latency(us) 00:27:26.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.988 =================================================================================================================== 00:27:26.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.988 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 322338 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=322729 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 322729 /var/tmp/bperf.sock 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 322729 ']' 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:27.247 09:48:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:27.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:27.247 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:27.247 [2024-10-07 09:48:16.117702] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:27.247 [2024-10-07 09:48:16.117785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322729 ] 00:27:27.247 [2024-10-07 09:48:16.174042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.505 [2024-10-07 09:48:16.281284] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.505 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:27.505 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:27.505 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:27.505 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:27.763 09:48:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:27.763 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.763 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:27.763 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.763 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.763 09:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:28.333 nvme0n1 00:27:28.333 09:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:28.333 09:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.333 09:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:28.333 09:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.333 09:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:28.333 09:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:28.333 Running I/O for 2 seconds... 
00:27:28.333 [2024-10-07 09:48:17.291528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.333 [2024-10-07 09:48:17.291780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.333 [2024-10-07 09:48:17.291815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.333 [2024-10-07 09:48:17.304790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.333 [2024-10-07 09:48:17.305037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.333 [2024-10-07 09:48:17.305070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.333 [2024-10-07 09:48:17.318001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.333 [2024-10-07 09:48:17.318294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.333 [2024-10-07 09:48:17.318341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.596 [2024-10-07 09:48:17.330798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.596 [2024-10-07 09:48:17.331020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.596 [2024-10-07 09:48:17.331052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.596 [2024-10-07 09:48:17.343798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.596 [2024-10-07 09:48:17.344117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.596 [2024-10-07 09:48:17.344162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.596 [2024-10-07 09:48:17.356622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.596 [2024-10-07 09:48:17.356839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.596 [2024-10-07 09:48:17.356883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.596 [2024-10-07 09:48:17.369651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.596 [2024-10-07 09:48:17.369881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.596 [2024-10-07 09:48:17.369913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.596 [2024-10-07 09:48:17.382507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.596 [2024-10-07 09:48:17.382762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.596 [2024-10-07 09:48:17.382791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.596 [2024-10-07 09:48:17.395535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.596 [2024-10-07 09:48:17.395804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.596 [2024-10-07 09:48:17.395835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.596 [2024-10-07 09:48:17.408706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.596 [2024-10-07 09:48:17.408941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.408971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.421903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.422225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.422270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.434787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.434992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.435039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.447551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.447845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.447873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.460213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.460479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.460507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.472941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.473193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.473237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.485783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.485987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 
[2024-10-07 09:48:17.486032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.498514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.498763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.498809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.511299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.511573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.511601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.524208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.524497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.524540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.536886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.537155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17657 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.537183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.549696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.549920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.549948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.562849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.563126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.563160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.575549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.575810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.575838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.597 [2024-10-07 09:48:17.588499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.597 [2024-10-07 09:48:17.588715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:16454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.597 [2024-10-07 09:48:17.588745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.601206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.601443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.601486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.614195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.614543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.614576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.627117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.627358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.627400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.639938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.640254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.640283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.652674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.652972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.653000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.665519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.665764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.665793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.678430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.678691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.678729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.691176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 
[2024-10-07 09:48:17.691486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.691514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.704166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.704509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.704554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.716969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.717224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.857 [2024-10-07 09:48:17.717252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.857 [2024-10-07 09:48:17.729618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.857 [2024-10-07 09:48:17.729876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.729905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.742291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) 
with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.742548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.742576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.755038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.755317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.755345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.767792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.768052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.768080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.780514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.780732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.780775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.793285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.793539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.793567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.806302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.806640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.806695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.819513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.819726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.819772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.832278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.832479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.832526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.858 [2024-10-07 09:48:17.844952] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:28.858 [2024-10-07 09:48:17.845294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.858 [2024-10-07 09:48:17.845340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.857695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.857932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.857961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.870427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.870682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.870712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.883208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.883482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.883528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:29.119 [2024-10-07 09:48:17.896171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.896473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.896518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.909019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.909263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.909303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.922152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.922362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.922394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.935313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.935561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.935591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.948799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.949053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.949083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.961972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.962281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.962327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.975548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.975899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.975932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:17.988921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:17.989234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:17.989265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.002135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.002417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:18.002446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.015366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.015676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:18.015732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.028696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.028942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:18.028974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.041837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.042108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:18.042138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.054874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.055149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:18.055195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.068020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.068299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:18.068327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.081329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.081588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:18.081632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.094565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.094789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 
[2024-10-07 09:48:18.094836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.119 [2024-10-07 09:48:18.107776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.119 [2024-10-07 09:48:18.108060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.119 [2024-10-07 09:48:18.108105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.379 [2024-10-07 09:48:18.120861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.379 [2024-10-07 09:48:18.121119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.379 [2024-10-07 09:48:18.121148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.379 [2024-10-07 09:48:18.134309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.379 [2024-10-07 09:48:18.134600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.379 [2024-10-07 09:48:18.134644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.379 [2024-10-07 09:48:18.147688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.379 [2024-10-07 09:48:18.147897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22858 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:29.379 [2024-10-07 09:48:18.147944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.379 [2024-10-07 09:48:18.160922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.379 [2024-10-07 09:48:18.161231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.379 [2024-10-07 09:48:18.161276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.379 [2024-10-07 09:48:18.174395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.379 [2024-10-07 09:48:18.174682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.379 [2024-10-07 09:48:18.174712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.379 [2024-10-07 09:48:18.187630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.379 [2024-10-07 09:48:18.187885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.187918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.201074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.201327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:25387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.201356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.214166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.214396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.214428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.227629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.227887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.227916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.240864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.241149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.241178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.254293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.254627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.254677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.267563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.267807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.267857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 19499.00 IOPS, 76.17 MiB/s [2024-10-07 09:48:18.280705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.280932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.280964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.293924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.294180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.294222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.307309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with 
pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.307609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.307654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.320552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.320825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.320871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.333801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.334065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.334094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.346943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.347158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.347202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.360151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.360420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.360456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.380 [2024-10-07 09:48:18.373249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.380 [2024-10-07 09:48:18.373508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.380 [2024-10-07 09:48:18.373537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.386573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.386862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.386891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.399808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.400073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.400116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.413105] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.413425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.413470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.426600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.426823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.426856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.439777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.440068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.440116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.453421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.453721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.453750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:29.638 [2024-10-07 09:48:18.466513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.466742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.466790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.479899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.480246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.480282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.493321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.493580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.493623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.506630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.506865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.506897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.519840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.520098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.520142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.533029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.533369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.533399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.546126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.546383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.546413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.559244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.559496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.559525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.572380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.572678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.572707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.585691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.585904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.585936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.599031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.599315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.599343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.612317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.612596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.612624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.638 [2024-10-07 09:48:18.625680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.638 [2024-10-07 09:48:18.625892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.638 [2024-10-07 09:48:18.625938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.638788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.639059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.639088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.652076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.652348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.652376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.665291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.665546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 
[2024-10-07 09:48:18.665575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.678442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.678737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.678766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.691822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.692137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.692166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.705293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.705623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.705654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.718465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.718714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22109 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.718747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.731986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.732274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.732322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.745201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.745456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.745484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.758597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.898 [2024-10-07 09:48:18.758835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.898 [2024-10-07 09:48:18.758863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.898 [2024-10-07 09:48:18.771807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.772062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:13485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.772091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.785136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.785396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.785443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.798450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.798728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.798757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.811631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.811885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.811917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.824939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.825278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.825317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.838351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.838673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.838706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.851538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.851795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.851828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.864794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.865049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.865079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.877883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 
[2024-10-07 09:48:18.878151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.878180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.899 [2024-10-07 09:48:18.890967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:29.899 [2024-10-07 09:48:18.891207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.899 [2024-10-07 09:48:18.891240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:18.903977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:18.904333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:18.904380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:18.916807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:18.917075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:18.917103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:18.929635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) 
with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:18.929909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:18.929939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:18.942502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:18.942738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:18.942770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:18.955431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:18.955687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:18.955725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:18.968192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:18.968497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:18.968528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:18.981178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:18.981496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:18.981539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:18.994513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:18.994751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:18.994785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.007617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.007845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.007876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.020508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.020725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.020755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.033305] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.033569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.033615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.046363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.046640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.046674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.059160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.059415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.059462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.072093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.072373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.072402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:30.159 [2024-10-07 09:48:19.085010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.085261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.085319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.098368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.098680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.098710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.111176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.111450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.111479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.123948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.124230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.124258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.136607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.136825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.136857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.159 [2024-10-07 09:48:19.149268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.159 [2024-10-07 09:48:19.149477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.159 [2024-10-07 09:48:19.149504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.161855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.162137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.162170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.174709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.174915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.174942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.187513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.187730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.187774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.200279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.200635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.200662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.212918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.213128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.213170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.225819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.226025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.226052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.238494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.238719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.238765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.251149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.251447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.251475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.263991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.417 [2024-10-07 09:48:19.264271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.417 [2024-10-07 09:48:19.264300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.417 [2024-10-07 09:48:19.276778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7990) with pdu=0x2000198fd208 00:27:30.418 [2024-10-07 09:48:19.277961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.418 
[2024-10-07 09:48:19.277995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:30.418 19499.50 IOPS, 76.17 MiB/s 00:27:30.418 Latency(us) 00:27:30.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.418 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:30.418 nvme0n1 : 2.01 19501.58 76.18 0.00 0.00 6549.83 5000.15 14466.47 00:27:30.418 =================================================================================================================== 00:27:30.418 Total : 19501.58 76.18 0.00 0.00 6549.83 5000.15 14466.47 00:27:30.418 { 00:27:30.418 "results": [ 00:27:30.418 { 00:27:30.418 "job": "nvme0n1", 00:27:30.418 "core_mask": "0x2", 00:27:30.418 "workload": "randwrite", 00:27:30.418 "status": "finished", 00:27:30.418 "queue_depth": 128, 00:27:30.418 "io_size": 4096, 00:27:30.418 "runtime": 2.00635, 00:27:30.418 "iops": 19501.582475639843, 00:27:30.418 "mibps": 76.17805654546814, 00:27:30.418 "io_failed": 0, 00:27:30.418 "io_timeout": 0, 00:27:30.418 "avg_latency_us": 6549.832900081311, 00:27:30.418 "min_latency_us": 5000.154074074074, 00:27:30.418 "max_latency_us": 14466.465185185185 00:27:30.418 } 00:27:30.418 ], 00:27:30.418 "core_count": 1 00:27:30.418 } 00:27:30.418 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:30.418 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:30.418 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:30.418 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:30.418 | .driver_specific 00:27:30.418 | .nvme_error 
00:27:30.418 | .status_code 00:27:30.418 | .command_transient_transport_error' 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 )) 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 322729 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 322729 ']' 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 322729 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322729 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322729' 00:27:30.724 killing process with pid 322729 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 322729 00:27:30.724 Received shutdown signal, test time was about 2.000000 seconds 00:27:30.724 00:27:30.724 Latency(us) 00:27:30.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.724 =================================================================================================================== 00:27:30.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:30.724 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@974 -- # wait 322729 00:27:30.981 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=323226 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 323226 /var/tmp/bperf.sock 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 323226 ']' 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:30.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:30.982 09:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:30.982 [2024-10-07 09:48:19.925105] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:30.982 [2024-10-07 09:48:19.925194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323226 ] 00:27:30.982 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:30.982 Zero copy mechanism will not be used. 00:27:31.239 [2024-10-07 09:48:19.983379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.239 [2024-10-07 09:48:20.104816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.239 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:31.239 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:31.239 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:31.239 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:31.802 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:31.802 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.802 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.802 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.802 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.802 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.061 nvme0n1 00:27:32.061 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:32.061 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.061 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:32.061 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.061 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:32.061 09:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:32.061 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:32.061 Zero copy mechanism will not be used. 00:27:32.061 Running I/O for 2 seconds... 
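The `get_transient_errcount` step in this test pipes `rpc.py bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and asserts the count is greater than zero. A minimal Python sketch of that same extraction follows; the payload shape is assumed from the jq path only, and the sample values are hypothetical, not taken from a real iostat response:

```python
import json


def get_transient_errcount(iostat_json: str) -> int:
    """Walk the same key path as the jq filter in host/digest.sh to
    return the COMMAND TRANSIENT TRANSPORT ERROR (00/22) count."""
    stat = json.loads(iostat_json)
    return (stat["bdevs"][0]
                ["driver_specific"]["nvme_error"]
                ["status_code"]["command_transient_transport_error"])


# Hypothetical iostat payload: only the keys on the jq path are
# modeled here; a real bdev_get_iostat response carries many more.
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 153}
            }
        }
    }]
})

count = get_transient_errcount(sample)
# The shell test then checks (( count > 0 )) before tearing down bperf.
print(count)
```

This mirrors the check at `host/digest.sh@71`, where `(( 153 > 0 ))` passes because every injected crc32c corruption surfaces as a transient transport error on the controller.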
00:27:32.061 [2024-10-07 09:48:21.038096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.061 [2024-10-07 09:48:21.038479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.061 [2024-10-07 09:48:21.038517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.061 [2024-10-07 09:48:21.044087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.061 [2024-10-07 09:48:21.044433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.061 [2024-10-07 09:48:21.044464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.061 [2024-10-07 09:48:21.050328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.061 [2024-10-07 09:48:21.050599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.061 [2024-10-07 09:48:21.050629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.061 [2024-10-07 09:48:21.057324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.057796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.057842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.063460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.063851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.063896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.068828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.069158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.069187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.074544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.074855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.074885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.079895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.080213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.080243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.085131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.085399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.085429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.090267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.090536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.090567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.095408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.095710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.095742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.100561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.100862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:32.321 [2024-10-07 09:48:21.100892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.105826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.106125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.106154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.111344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.111662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.111698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.116954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.117260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.321 [2024-10-07 09:48:21.117289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.321 [2024-10-07 09:48:21.122228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.321 [2024-10-07 09:48:21.122526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.321 [2024-10-07 09:48:21.122560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.321 [2024-10-07 09:48:21.127552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.321 [2024-10-07 09:48:21.127910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.321 [2024-10-07 09:48:21.127940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.321 [2024-10-07 09:48:21.133215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.321 [2024-10-07 09:48:21.133506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.321 [2024-10-07 09:48:21.133535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.321 [2024-10-07 09:48:21.139151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.321 [2024-10-07 09:48:21.139440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.321 [2024-10-07 09:48:21.139469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.321 [2024-10-07 09:48:21.145166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.321 [2024-10-07 09:48:21.145435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.145465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.150402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.150780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.150812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.156707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.156895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.156927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.163079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.163425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.163454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.170087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.170402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.170432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.176705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.177042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.177071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.183366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.183679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.183709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.190381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.190693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.190723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.197233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.197573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.197603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.203365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.203630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.203685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.208649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.208931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.208978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.213958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.214374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.214403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.219292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.219583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.219611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.225173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.225509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.225538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.231605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.231924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.231954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.238044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.238356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.238386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.244481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.244874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.244903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.251890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.252220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.252251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.259268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.259584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.259613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.266242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.266597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.266625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.272233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.272501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.272530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.277425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.277716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.277746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.282547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.282853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.282888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.287790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.288102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.288131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.293094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.293377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.293416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.298317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.298746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.298778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.303627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.322 [2024-10-07 09:48:21.304069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.322 [2024-10-07 09:48:21.304113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.322 [2024-10-07 09:48:21.309117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.323 [2024-10-07 09:48:21.309422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.323 [2024-10-07 09:48:21.309450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.323 [2024-10-07 09:48:21.314389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.323 [2024-10-07 09:48:21.314732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.323 [2024-10-07 09:48:21.314764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.319526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.319879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.319910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.324806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.325104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.325133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.330007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.330290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.330319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.335083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.335359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.335387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.340448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.340745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.340777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.346653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.346996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.347040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.353256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.353513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.353544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.360213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.360490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.360520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.366971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.367380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.367411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.374230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.374532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.374563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.381403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.381722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.381753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.387860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.388144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.388189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.393102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.393375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.393403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.398646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.398954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.398999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.404037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.404318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.404348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.409273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.409524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.409568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.414538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.414829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.414860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.419885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.420160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.420190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.425149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.425406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.425435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.430572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.430859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.430899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.435988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.436251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.436281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.441700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.441973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.442003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.448067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.448364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.448392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.454553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.454874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.454904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.585 [2024-10-07 09:48:21.460721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.585 [2024-10-07 09:48:21.461034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.585 [2024-10-07 09:48:21.461077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.467257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.467549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.467580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.473909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.474254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.474282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.480380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.480655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.480694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.487201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.487476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.487506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.493194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.493495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.493526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.499139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.499417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.499448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.505173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.505486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.505530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.511313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.511612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.511642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.517266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.517551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.517582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.523218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.523502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.523532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.529102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.529391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.529420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.534868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.535152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.535188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.540832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.541123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.541153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.546862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.547250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.547278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.553072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.553376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.553408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.557867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.558141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.558171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.562705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.562981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.563012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.567540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.567823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.567854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.572361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.572621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.572651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.586 [2024-10-07 09:48:21.577058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.586 [2024-10-07 09:48:21.577357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.586 [2024-10-07 09:48:21.577388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.848 [2024-10-07 09:48:21.581853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.848 [2024-10-07 09:48:21.582150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.848 [2024-10-07 09:48:21.582181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.848 [2024-10-07 09:48:21.586884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.848 [2024-10-07 09:48:21.587151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.848 [2024-10-07 09:48:21.587181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.848 [2024-10-07 09:48:21.591592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.848 [2024-10-07 09:48:21.591886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.848 [2024-10-07 09:48:21.591916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.848 [2024-10-07 09:48:21.596462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:32.848 [2024-10-07 09:48:21.596734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.848 [2024-10-07 09:48:21.596765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.848 [2024-10-07 09:48:21.601208]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.601506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.601539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.605965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.606244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.606273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.610738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.611020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.611050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.616344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.616613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.616657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.622306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.622632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.622662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.628305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.628593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.628622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.634552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.634855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.634889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.641334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.641716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.641747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.647174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.647436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.647465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.652089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.652372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.652402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.657010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.657270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.657299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.662013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.662270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.662300] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.666961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.667231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.667260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.672081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.672348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.672384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.677181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.677462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.677506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.682273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.682520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.682565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.687287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.687547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.687577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.692117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.692398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.692429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.697078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.697407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.697436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.701858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.702138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.702168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.706551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.706867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.706898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.711442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.711717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.711761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.716194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.716438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.716466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.720790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.721072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.721099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.726180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.726465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.726494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.732129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.732374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.732419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.738771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.739024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.739054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.744825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 
00:27:32.848 [2024-10-07 09:48:21.745037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.745082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.749913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.750164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.750193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.755354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.755595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.755624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.760897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.761134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.761167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.766452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.766644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.766715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.771679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.771868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.771898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.777050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.848 [2024-10-07 09:48:21.777255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.848 [2024-10-07 09:48:21.777300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.848 [2024-10-07 09:48:21.782230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.782386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.782418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 
09:48:21.787467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.787624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.787677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.792000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.792158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.792188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.796551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.796735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.796766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.801295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.801469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.801499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.805604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.805773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.805808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.809832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.810010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.810044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.814090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.814251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.814279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.818544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.818762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.818797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.823712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.823986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.824029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.828966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.829195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.829224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.834634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.834961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.835006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.849 [2024-10-07 09:48:21.840432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:32.849 [2024-10-07 09:48:21.840646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.849 [2024-10-07 09:48:21.840698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.113 [2024-10-07 09:48:21.845489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.113 [2024-10-07 09:48:21.845746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.113 [2024-10-07 09:48:21.845788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.113 [2024-10-07 09:48:21.850716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.113 [2024-10-07 09:48:21.850941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.113 [2024-10-07 09:48:21.850972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.113 [2024-10-07 09:48:21.855809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.113 [2024-10-07 09:48:21.856049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.113 [2024-10-07 09:48:21.856077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.113 [2024-10-07 09:48:21.860929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.113 [2024-10-07 09:48:21.861106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.861141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.866148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.866312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.866342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.872324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.872589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.872619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.877391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.877662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.877700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.882434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.882617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.882660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.887533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.887703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.887731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.893403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.893672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.893711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.898566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.898742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.898771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.903639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.903824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.903866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.908764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.909012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.909043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.913099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.913266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.913295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.918094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.918330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.918360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.923395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.923661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.923699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.929207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.929407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.929437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.933570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.933715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.933746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.937832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.938000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.938036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.942172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.942327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.942357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.946399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.946555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.946594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.950687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.950838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.950868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.955683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.955885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.955916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.960516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.960675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.960705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.964899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.965062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.113 [2024-10-07 09:48:21.965092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.113 [2024-10-07 09:48:21.969364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.113 [2024-10-07 09:48:21.969517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:21.969546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:21.973743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:21.973899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:21.973928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:21.978016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:21.978166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:21.978195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:21.982338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:21.982500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:21.982528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:21.986757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:21.986910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:21.986940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:21.991201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:21.991359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:21.991389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:21.995648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:21.995829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:21.995859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.000092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.000289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.000319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.004503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.004680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.004722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.008898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.009066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.009096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.013237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.013418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.013455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.017734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.017883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.017913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.022194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.022359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.022388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.026607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.026768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.026803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.114 5673.00 IOPS, 709.12 MiB/s [2024-10-07 09:48:22.032249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.032396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.032426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.036612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.036803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.036834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.041109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.041247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.041275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.046198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.046336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.046364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.052222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.052469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.052517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.057295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.057540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.057571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.062342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.062507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.062536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.067487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.067732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.067762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.072531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.072792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.072823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.077734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.077867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.077896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.082774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.082945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.082973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.087964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.088164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.088192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.092992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.093225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.093261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.114 [2024-10-07 09:48:22.098286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.114 [2024-10-07 09:48:22.098449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.114 [2024-10-07 09:48:22.098478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.115 [2024-10-07 09:48:22.103381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.115 [2024-10-07 09:48:22.103516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.115 [2024-10-07 09:48:22.103544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.376 [2024-10-07 09:48:22.108680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.376 [2024-10-07 09:48:22.108860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.376 [2024-10-07 09:48:22.108888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.376 [2024-10-07 09:48:22.113748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.376 [2024-10-07 09:48:22.113928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.376 [2024-10-07 09:48:22.113957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.376 [2024-10-07 09:48:22.118846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.376 [2024-10-07 09:48:22.119012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.119042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.123985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.124233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.124263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.129102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.129293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.129322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.134224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.134376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.134405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.139430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.139576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.139605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.144483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.144676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.144717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.149609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.149747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.149781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.154677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.154826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.154854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.159639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.159783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.159813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.164732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.164872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.164900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.169896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.170087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.170115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.174982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.175167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.175196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.180071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.180253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.180281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.185123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.185271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.185298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.190398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.190596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.190624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.195535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.195703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.195739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.200690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.200892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.200925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.205743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.205861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.205889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.210914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.211065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.211094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.216007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.216141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.216169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.221065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.221215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.221242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.226156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.226301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.226329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.231240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.231384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.231418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.236344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.236471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.236498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.241391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.241541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.377 [2024-10-07 09:48:22.241583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:33.377 [2024-10-07 09:48:22.246474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90
00:27:33.377 [2024-10-07 09:48:22.246618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-10-07 09:48:22.246646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.377 [2024-10-07 09:48:22.251420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.377 [2024-10-07 09:48:22.251534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-10-07 09:48:22.251561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.377 [2024-10-07 09:48:22.255771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.255916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.255944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.260830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.260995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.261029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.265899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.266073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.266101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.270882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.271033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.271075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.275969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.276110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.276137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.281019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.281182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.281210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 
09:48:22.286119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.286264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.286307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.291273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.291475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.291503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.296428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.296608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.296637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.301595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.301804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.301833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.306598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.306762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.306790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.311754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.311964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.311999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.316872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.317051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.317080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.321988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.322136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.322164] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.327181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.327347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.327375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.332283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.332556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.332587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.337357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.337515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.337549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.342455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.342734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 
09:48:22.342766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.347601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.347794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.347823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.352575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.352773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.352801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.357790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.357972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.358001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.362882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.363051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.363092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.378 [2024-10-07 09:48:22.368051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.378 [2024-10-07 09:48:22.368250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.378 [2024-10-07 09:48:22.368278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.640 [2024-10-07 09:48:22.373198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.640 [2024-10-07 09:48:22.373452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.640 [2024-10-07 09:48:22.373483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.640 [2024-10-07 09:48:22.378342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.640 [2024-10-07 09:48:22.378533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.640 [2024-10-07 09:48:22.378567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.640 [2024-10-07 09:48:22.383463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.640 [2024-10-07 09:48:22.383619] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.640 [2024-10-07 09:48:22.383647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.640 [2024-10-07 09:48:22.388660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.640 [2024-10-07 09:48:22.388826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.640 [2024-10-07 09:48:22.388855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.640 [2024-10-07 09:48:22.393640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.640 [2024-10-07 09:48:22.393894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.640 [2024-10-07 09:48:22.393925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.640 [2024-10-07 09:48:22.399028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.640 [2024-10-07 09:48:22.399251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.640 [2024-10-07 09:48:22.399282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.404033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 
09:48:22.404191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.404220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.409148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.409395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.409426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.414164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.414358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.414386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.419210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.419415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.419442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.424313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.424491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.424519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.429531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.429712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.429740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.434478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.434650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.434688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.439571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.439741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.439770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.444646] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.444816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.444844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.449703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.449883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.449911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.454700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.454837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.454866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.459763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.459923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.459951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.465063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.465267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.465295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.470225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.470458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.470489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.475268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.475416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.475445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.480186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.480389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.480417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.485261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.485461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.485489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.490394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.490530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.490564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.495437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.495612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.495647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.500642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.500819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.500849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.641 [2024-10-07 09:48:22.505831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.641 [2024-10-07 09:48:22.506026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.641 [2024-10-07 09:48:22.506054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.510927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.511112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.511141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.516091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.516305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.516335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.521247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.521406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.642 [2024-10-07 09:48:22.521434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.526326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.526483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.526517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.531408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.531560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.531588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.536481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.536641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.536676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.541540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.541720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.541754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.546571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.546798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.546827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.551595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.551756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.551785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.556633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.556837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.556866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.561630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.561790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.561819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.566740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.566870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.566906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.571890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.572079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.572107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.576992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.577149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.577177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.581950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 
00:27:33.642 [2024-10-07 09:48:22.582108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.582137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.587158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.587323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.587352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.592326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.592517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.592546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.597295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.597474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.597502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.602495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.602701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.602730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.607595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.607779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.607808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.612690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.642 [2024-10-07 09:48:22.612852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.642 [2024-10-07 09:48:22.612881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.642 [2024-10-07 09:48:22.617779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.643 [2024-10-07 09:48:22.617923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.643 [2024-10-07 09:48:22.617951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.643 [2024-10-07 
09:48:22.622910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.643 [2024-10-07 09:48:22.623127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.643 [2024-10-07 09:48:22.623161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.643 [2024-10-07 09:48:22.627945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.643 [2024-10-07 09:48:22.628102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.643 [2024-10-07 09:48:22.628136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.643 [2024-10-07 09:48:22.632975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.643 [2024-10-07 09:48:22.633167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.643 [2024-10-07 09:48:22.633195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.638099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.638251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.638279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.643293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.643441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.643469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.648457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.648681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.648717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.653550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.653721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.653749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.658673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.658819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.658846] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.663761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.663907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.663941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.668815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.668976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.669004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.674013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.674186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.674214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.679075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.679345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 
09:48:22.679375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.684179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.684411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.684441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.689327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.689565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.689596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.694403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.694625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.694662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.699465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.699605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.699633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.704546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.704806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.704836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.709880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.710135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.710165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.714862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.715085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.715122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.719895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.720046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.720074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.725046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.725226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.725253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.730113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.903 [2024-10-07 09:48:22.730320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.903 [2024-10-07 09:48:22.730348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.903 [2024-10-07 09:48:22.735260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.735414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.735442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.740323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 
09:48:22.740486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.740514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.745444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.745702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.745742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.750548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.750773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.750801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.755584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.755757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.755786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.760710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.760940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.760970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.765738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.765882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.765910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.770801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.770944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.770971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.775823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.776017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.776045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.780978] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.781100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.781127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.786036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.786191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.786219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.791163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.791257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.791285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.796327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.796485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.796513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.801506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.801690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.801718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.806742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.806885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.806912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.811854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.811969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.811997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.816945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.817173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.817215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.822030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.822240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.822275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.827183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.827323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.827351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.832288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.832415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.832442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.837482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.904 [2024-10-07 09:48:22.837623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.904 [2024-10-07 09:48:22.837650] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.904 [2024-10-07 09:48:22.842502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.842705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.842734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.847632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.847770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.847805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.852754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.852863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.852891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.857761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.857941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.857969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.862773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.862899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.862927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.867832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.867970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.868015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.872941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.873073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.873101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.878135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.878258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.878287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.883308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.883463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.883491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.888428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.888537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.888565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.905 [2024-10-07 09:48:22.893558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:33.905 [2024-10-07 09:48:22.893772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.905 [2024-10-07 09:48:22.893807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.164 [2024-10-07 09:48:22.898740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.164 [2024-10-07 09:48:22.898840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.164 [2024-10-07 09:48:22.898869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.164 [2024-10-07 09:48:22.903778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.164 [2024-10-07 09:48:22.903924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.164 [2024-10-07 09:48:22.903954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.164 [2024-10-07 09:48:22.908796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.164 [2024-10-07 09:48:22.909013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.164 [2024-10-07 09:48:22.909055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.164 [2024-10-07 09:48:22.913999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.164 [2024-10-07 09:48:22.914226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.914254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.919006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 
00:27:34.165 [2024-10-07 09:48:22.919161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.919187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.924053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.924250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.924278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.929261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.929438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.929465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.934294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.934467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.934496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.939340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.939534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.939563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.944418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.944605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.944641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.949465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.949674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.949718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.954488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.954633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.954691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 
09:48:22.959556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.959739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.959769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.964626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.964793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.964837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.969842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.970028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.970057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.974917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.975093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.975120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.980006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.980150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.980183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.985187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.985356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.985383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.990301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.990429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.990458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:22.995377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:22.995536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:22.995564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:23.000661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:23.000852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:23.000887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:23.005793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:23.005943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:23.005971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:23.010867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:23.011033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:23.011060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:23.015962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:23.016140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:23.016168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:23.021022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:23.021175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:23.021203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.165 [2024-10-07 09:48:23.026029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:23.026249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.165 [2024-10-07 09:48:23.026276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.165 5884.00 IOPS, 735.50 MiB/s [2024-10-07 09:48:23.032427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7cd0) with pdu=0x2000198fef90 00:27:34.165 [2024-10-07 09:48:23.032643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.166 [2024-10-07 09:48:23.032701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.166 00:27:34.166 Latency(us) 00:27:34.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.166 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:34.166 nvme0n1 : 2.00 5880.26 735.03 0.00 0.00 2713.39 1711.22 10825.58 
00:27:34.166 =================================================================================================================== 00:27:34.166 Total : 5880.26 735.03 0.00 0.00 2713.39 1711.22 10825.58 00:27:34.166 { 00:27:34.166 "results": [ 00:27:34.166 { 00:27:34.166 "job": "nvme0n1", 00:27:34.166 "core_mask": "0x2", 00:27:34.166 "workload": "randwrite", 00:27:34.166 "status": "finished", 00:27:34.166 "queue_depth": 16, 00:27:34.166 "io_size": 131072, 00:27:34.166 "runtime": 2.004502, 00:27:34.166 "iops": 5880.263526801171, 00:27:34.166 "mibps": 735.0329408501464, 00:27:34.166 "io_failed": 0, 00:27:34.166 "io_timeout": 0, 00:27:34.166 "avg_latency_us": 2713.3863187629813, 00:27:34.166 "min_latency_us": 1711.2177777777779, 00:27:34.166 "max_latency_us": 10825.576296296296 00:27:34.166 } 00:27:34.166 ], 00:27:34.166 "core_count": 1 00:27:34.166 } 00:27:34.166 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:34.166 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:34.166 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:34.166 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:34.166 | .driver_specific 00:27:34.166 | .nvme_error 00:27:34.166 | .status_code 00:27:34.166 | .command_transient_transport_error' 00:27:34.424 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 380 > 0 )) 00:27:34.424 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 323226 00:27:34.424 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 323226 ']' 00:27:34.424 09:48:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 323226 00:27:34.424 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:34.424 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.424 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 323226 00:27:34.684 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:34.684 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:34.684 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 323226' 00:27:34.684 killing process with pid 323226 00:27:34.684 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 323226 00:27:34.684 Received shutdown signal, test time was about 2.000000 seconds 00:27:34.684 00:27:34.684 Latency(us) 00:27:34.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.684 =================================================================================================================== 00:27:34.684 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.684 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 323226 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 321666 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 321666 ']' 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 321666 00:27:34.943 09:48:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321666 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321666' 00:27:34.943 killing process with pid 321666 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 321666 00:27:34.943 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 321666 00:27:35.200 00:27:35.200 real 0m15.785s 00:27:35.200 user 0m31.507s 00:27:35.200 sys 0m4.429s 00:27:35.200 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.201 ************************************ 00:27:35.201 END TEST nvmf_digest_error 00:27:35.201 ************************************ 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.201 rmmod nvme_tcp 00:27:35.201 rmmod nvme_fabrics 00:27:35.201 rmmod nvme_keyring 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 321666 ']' 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 321666 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 321666 ']' 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 321666 00:27:35.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (321666) - No such process 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 321666 is not found' 00:27:35.201 Process with pid 321666 is not found 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:35.201 
09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.201 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.739 00:27:37.739 real 0m36.410s 00:27:37.739 user 1m4.868s 00:27:37.739 sys 0m10.187s 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:37.739 ************************************ 00:27:37.739 END TEST nvmf_digest 00:27:37.739 ************************************ 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.739 ************************************ 00:27:37.739 START TEST 
nvmf_bdevperf 00:27:37.739 ************************************ 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:37.739 * Looking for test storage... 00:27:37.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@344 -- # case "$op" in 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.739 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:37.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.740 
--rc genhtml_branch_coverage=1 00:27:37.740 --rc genhtml_function_coverage=1 00:27:37.740 --rc genhtml_legend=1 00:27:37.740 --rc geninfo_all_blocks=1 00:27:37.740 --rc geninfo_unexecuted_blocks=1 00:27:37.740 00:27:37.740 ' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:37.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.740 --rc genhtml_branch_coverage=1 00:27:37.740 --rc genhtml_function_coverage=1 00:27:37.740 --rc genhtml_legend=1 00:27:37.740 --rc geninfo_all_blocks=1 00:27:37.740 --rc geninfo_unexecuted_blocks=1 00:27:37.740 00:27:37.740 ' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:37.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.740 --rc genhtml_branch_coverage=1 00:27:37.740 --rc genhtml_function_coverage=1 00:27:37.740 --rc genhtml_legend=1 00:27:37.740 --rc geninfo_all_blocks=1 00:27:37.740 --rc geninfo_unexecuted_blocks=1 00:27:37.740 00:27:37.740 ' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:37.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.740 --rc genhtml_branch_coverage=1 00:27:37.740 --rc genhtml_function_coverage=1 00:27:37.740 --rc genhtml_legend=1 00:27:37.740 --rc geninfo_all_blocks=1 00:27:37.740 --rc geninfo_unexecuted_blocks=1 00:27:37.740 00:27:37.740 ' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- 
# NVMF_SECOND_PORT=4421 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:37.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:27:37.740 09:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.641 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.641 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.641 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:27:39.642 Found 
0000:09:00.0 (0x8086 - 0x1592) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:27:39.642 Found 0000:09:00.1 (0x8086 - 0x1592) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp 
== tcp ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:39.642 Found net devices under 0000:09:00.0: cvl_0_0 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:39.642 Found net devices under 0000:09:00.1: cvl_0_1 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@440 -- # is_hw=yes 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.642 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:27:39.643 00:27:39.643 --- 10.0.0.2 ping statistics --- 00:27:39.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.643 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:39.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:27:39.643 00:27:39.643 --- 10.0.0.1 ping statistics --- 00:27:39.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.643 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=325481 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 325481 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 325481 ']' 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:39.643 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.643 [2024-10-07 09:48:28.497371] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:39.643 [2024-10-07 09:48:28.497440] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.643 [2024-10-07 09:48:28.557994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:39.901 [2024-10-07 09:48:28.665703] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.901 [2024-10-07 09:48:28.665756] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:39.901 [2024-10-07 09:48:28.665780] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.901 [2024-10-07 09:48:28.665790] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.901 [2024-10-07 09:48:28.665801] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.901 [2024-10-07 09:48:28.666537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.901 [2024-10-07 09:48:28.666594] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.901 [2024-10-07 09:48:28.666597] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.901 [2024-10-07 09:48:28.806388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.901 09:48:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.901 Malloc0 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.901 [2024-10-07 09:48:28.865639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:39.901 { 00:27:39.901 "params": { 00:27:39.901 "name": "Nvme$subsystem", 00:27:39.901 "trtype": "$TEST_TRANSPORT", 00:27:39.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.901 "adrfam": "ipv4", 00:27:39.901 "trsvcid": "$NVMF_PORT", 00:27:39.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.901 "hdgst": ${hdgst:-false}, 00:27:39.901 "ddgst": ${ddgst:-false} 00:27:39.901 }, 00:27:39.901 "method": "bdev_nvme_attach_controller" 00:27:39.901 } 00:27:39.901 EOF 00:27:39.901 )") 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:27:39.901 09:48:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:39.901 "params": { 00:27:39.901 "name": "Nvme1", 00:27:39.901 "trtype": "tcp", 00:27:39.901 "traddr": "10.0.0.2", 00:27:39.901 "adrfam": "ipv4", 00:27:39.901 "trsvcid": "4420", 00:27:39.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:39.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:39.901 "hdgst": false, 00:27:39.901 "ddgst": false 00:27:39.901 }, 00:27:39.901 "method": "bdev_nvme_attach_controller" 00:27:39.901 }' 00:27:40.160 [2024-10-07 09:48:28.917937] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:40.160 [2024-10-07 09:48:28.918019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325618 ] 00:27:40.160 [2024-10-07 09:48:28.974541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.160 [2024-10-07 09:48:29.088321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.727 Running I/O for 1 seconds... 
00:27:41.661 8277.00 IOPS, 32.33 MiB/s 00:27:41.661 Latency(us) 00:27:41.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.661 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:41.661 Verification LBA range: start 0x0 length 0x4000 00:27:41.661 Nvme1n1 : 1.01 8330.71 32.54 0.00 0.00 15301.20 1535.24 13107.20 00:27:41.661 =================================================================================================================== 00:27:41.661 Total : 8330.71 32.54 0.00 0.00 15301.20 1535.24 13107.20 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=325757 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:41.920 { 00:27:41.920 "params": { 00:27:41.920 "name": "Nvme$subsystem", 00:27:41.920 "trtype": "$TEST_TRANSPORT", 00:27:41.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.920 "adrfam": "ipv4", 00:27:41.920 "trsvcid": "$NVMF_PORT", 00:27:41.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.920 "hdgst": ${hdgst:-false}, 00:27:41.920 "ddgst": ${ddgst:-false} 00:27:41.920 }, 00:27:41.920 "method": "bdev_nvme_attach_controller" 
00:27:41.920 } 00:27:41.920 EOF 00:27:41.920 )") 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:27:41.920 09:48:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:41.920 "params": { 00:27:41.920 "name": "Nvme1", 00:27:41.920 "trtype": "tcp", 00:27:41.920 "traddr": "10.0.0.2", 00:27:41.920 "adrfam": "ipv4", 00:27:41.920 "trsvcid": "4420", 00:27:41.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:41.920 "hdgst": false, 00:27:41.920 "ddgst": false 00:27:41.920 }, 00:27:41.920 "method": "bdev_nvme_attach_controller" 00:27:41.920 }' 00:27:41.920 [2024-10-07 09:48:30.780637] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:27:41.920 [2024-10-07 09:48:30.780751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325757 ] 00:27:41.920 [2024-10-07 09:48:30.839070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.180 [2024-10-07 09:48:30.950295] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.438 Running I/O for 15 seconds... 
00:27:44.884 8497.00 IOPS, 33.19 MiB/s 8512.00 IOPS, 33.25 MiB/s
00:27:44.884 09:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 325481
00:27:44.884 09:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:27:44.884 [2024-10-07 09:48:33.743333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.884 [2024-10-07 09:48:33.743383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:44.888 [... identical READ command / ABORTED - SQ DELETION (00/08) record pairs repeated for lba 35008 through 35936 (len:8 each) omitted ...]
00:27:44.888 [2024-10-07 09:48:33.746839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1
lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.746852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.746866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.746880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.746894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.746907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.746922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.746935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.746964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.746977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.746991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.747002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 
[2024-10-07 09:48:33.747029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.747041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.747054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.747065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.747082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.888 [2024-10-07 09:48:33.747094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.747107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452310 is same with the state(6) to be set 00:27:44.888 [2024-10-07 09:48:33.747129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:44.888 [2024-10-07 09:48:33.747140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:44.888 [2024-10-07 09:48:33.747151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36016 len:8 PRP1 0x0 PRP2 0x0 00:27:44.888 [2024-10-07 09:48:33.747162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.888 [2024-10-07 09:48:33.747216] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1452310 was disconnected and freed. reset controller. 
00:27:44.888 [2024-10-07 09:48:33.750300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.888 [2024-10-07 09:48:33.750377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.888 [2024-10-07 09:48:33.751081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.888 [2024-10-07 09:48:33.751110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.751126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.751372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.751573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.751590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.751604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.754799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.763850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.764197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.764226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.764241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.764469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.764699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.764734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.764747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.767615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.776889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.777220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.777252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.777269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.777487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.777734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.777755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.777769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.780638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.789987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.790398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.790426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.790442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.790684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.790903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.790923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.790936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.793971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.803085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.803495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.803523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.803539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.803787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.803987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.804007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.804037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.806901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.816128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.816533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.816560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.816576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.816823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.817055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.817075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.817087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.819937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.829336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.829680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.829724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.829741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.829979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.830182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.830201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.830213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.833115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.842413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.842801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.842831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.842848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.843097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.843284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.843303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.843315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.846215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.855518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.855842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.855870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.889 [2024-10-07 09:48:33.855885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.889 [2024-10-07 09:48:33.856100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.889 [2024-10-07 09:48:33.856303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.889 [2024-10-07 09:48:33.856322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.889 [2024-10-07 09:48:33.856335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.889 [2024-10-07 09:48:33.859251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.889 [2024-10-07 09:48:33.868741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.889 [2024-10-07 09:48:33.869145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.889 [2024-10-07 09:48:33.869173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:44.890 [2024-10-07 09:48:33.869189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:44.890 [2024-10-07 09:48:33.869424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:44.890 [2024-10-07 09:48:33.869627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.890 [2024-10-07 09:48:33.869661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.890 [2024-10-07 09:48:33.869686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.890 [2024-10-07 09:48:33.872576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.149 [2024-10-07 09:48:33.882077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.149 [2024-10-07 09:48:33.882385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-10-07 09:48:33.882411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.149 [2024-10-07 09:48:33.882427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.149 [2024-10-07 09:48:33.882641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.149 [2024-10-07 09:48:33.882876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.149 [2024-10-07 09:48:33.882897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.149 [2024-10-07 09:48:33.882910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.149 [2024-10-07 09:48:33.885793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.149 [2024-10-07 09:48:33.895214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.149 [2024-10-07 09:48:33.895530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-10-07 09:48:33.895559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.149 [2024-10-07 09:48:33.895575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.149 [2024-10-07 09:48:33.895862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.149 [2024-10-07 09:48:33.896076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.149 [2024-10-07 09:48:33.896109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.149 [2024-10-07 09:48:33.896121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.149 [2024-10-07 09:48:33.899023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.149 [2024-10-07 09:48:33.908286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.149 [2024-10-07 09:48:33.908628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-10-07 09:48:33.908656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.149 [2024-10-07 09:48:33.908705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.149 [2024-10-07 09:48:33.908961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.149 [2024-10-07 09:48:33.909166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.149 [2024-10-07 09:48:33.909185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.149 [2024-10-07 09:48:33.909197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.149 [2024-10-07 09:48:33.911943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.149 [2024-10-07 09:48:33.921363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.149 [2024-10-07 09:48:33.921769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-10-07 09:48:33.921798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.149 [2024-10-07 09:48:33.921814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.149 [2024-10-07 09:48:33.922050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.149 [2024-10-07 09:48:33.922253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.149 [2024-10-07 09:48:33.922272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.149 [2024-10-07 09:48:33.922284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.149 [2024-10-07 09:48:33.925186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.149 [2024-10-07 09:48:33.934480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.149 [2024-10-07 09:48:33.934893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-10-07 09:48:33.934922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.149 [2024-10-07 09:48:33.934937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.149 [2024-10-07 09:48:33.935172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.149 [2024-10-07 09:48:33.935374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.149 [2024-10-07 09:48:33.935394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.149 [2024-10-07 09:48:33.935406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.149 [2024-10-07 09:48:33.938232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.149 [2024-10-07 09:48:33.947577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.149 [2024-10-07 09:48:33.947949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-10-07 09:48:33.947992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.149 [2024-10-07 09:48:33.948009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.149 [2024-10-07 09:48:33.948242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.149 [2024-10-07 09:48:33.948444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.149 [2024-10-07 09:48:33.948467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.149 [2024-10-07 09:48:33.948480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.149 [2024-10-07 09:48:33.951384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.149 [2024-10-07 09:48:33.960596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.149 [2024-10-07 09:48:33.960968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-10-07 09:48:33.961010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.149 [2024-10-07 09:48:33.961025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.149 [2024-10-07 09:48:33.961233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.149 [2024-10-07 09:48:33.961435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.149 [2024-10-07 09:48:33.961453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.149 [2024-10-07 09:48:33.961466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.149 [2024-10-07 09:48:33.964404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.149 [2024-10-07 09:48:33.973774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.149 [2024-10-07 09:48:33.974103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-10-07 09:48:33.974130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:33.974145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:33.974360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.150 [2024-10-07 09:48:33.974563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.150 [2024-10-07 09:48:33.974581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.150 [2024-10-07 09:48:33.974593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.150 [2024-10-07 09:48:33.977485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.150 [2024-10-07 09:48:33.986895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.150 [2024-10-07 09:48:33.987301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-10-07 09:48:33.987329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:33.987345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:33.987581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.150 [2024-10-07 09:48:33.987835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.150 [2024-10-07 09:48:33.987856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.150 [2024-10-07 09:48:33.987869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.150 [2024-10-07 09:48:33.990750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.150 [2024-10-07 09:48:33.999876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.150 [2024-10-07 09:48:34.000222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-10-07 09:48:34.000250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:34.000267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:34.000501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.150 [2024-10-07 09:48:34.000748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.150 [2024-10-07 09:48:34.000769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.150 [2024-10-07 09:48:34.000798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.150 [2024-10-07 09:48:34.004027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.150 [2024-10-07 09:48:34.013387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.150 [2024-10-07 09:48:34.013769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-10-07 09:48:34.013798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:34.013830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:34.014061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.150 [2024-10-07 09:48:34.014286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.150 [2024-10-07 09:48:34.014306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.150 [2024-10-07 09:48:34.014319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.150 [2024-10-07 09:48:34.017344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.150 [2024-10-07 09:48:34.026476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.150 [2024-10-07 09:48:34.026844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-10-07 09:48:34.026873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:34.026889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:34.027133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.150 [2024-10-07 09:48:34.027320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.150 [2024-10-07 09:48:34.027338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.150 [2024-10-07 09:48:34.027350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.150 [2024-10-07 09:48:34.030292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.150 [2024-10-07 09:48:34.039436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.150 [2024-10-07 09:48:34.039746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-10-07 09:48:34.039774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:34.039789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:34.040008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.150 [2024-10-07 09:48:34.040212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.150 [2024-10-07 09:48:34.040230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.150 [2024-10-07 09:48:34.040242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.150 [2024-10-07 09:48:34.043175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.150 [2024-10-07 09:48:34.052575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.150 [2024-10-07 09:48:34.052919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-10-07 09:48:34.052948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:34.052965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:34.053186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.150 [2024-10-07 09:48:34.053390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.150 [2024-10-07 09:48:34.053409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.150 [2024-10-07 09:48:34.053421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.150 [2024-10-07 09:48:34.056315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.150 [2024-10-07 09:48:34.065772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.150 [2024-10-07 09:48:34.066181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-10-07 09:48:34.066208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:34.066224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:34.066458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.150 [2024-10-07 09:48:34.066661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.150 [2024-10-07 09:48:34.066706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.150 [2024-10-07 09:48:34.066719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.150 [2024-10-07 09:48:34.069601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.150 [2024-10-07 09:48:34.078856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.150 [2024-10-07 09:48:34.079195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-10-07 09:48:34.079222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.150 [2024-10-07 09:48:34.079238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.150 [2024-10-07 09:48:34.079470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.151 [2024-10-07 09:48:34.079658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.151 [2024-10-07 09:48:34.079702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.151 [2024-10-07 09:48:34.079721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.151 [2024-10-07 09:48:34.082584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.151 [2024-10-07 09:48:34.092002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.151 [2024-10-07 09:48:34.092373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.151 [2024-10-07 09:48:34.092400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.151 [2024-10-07 09:48:34.092415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.151 [2024-10-07 09:48:34.092629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.151 [2024-10-07 09:48:34.092862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.151 [2024-10-07 09:48:34.092883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.151 [2024-10-07 09:48:34.092896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.151 [2024-10-07 09:48:34.095782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.151 [2024-10-07 09:48:34.105129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.151 [2024-10-07 09:48:34.105474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.151 [2024-10-07 09:48:34.105503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.151 [2024-10-07 09:48:34.105519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.151 [2024-10-07 09:48:34.105765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.151 [2024-10-07 09:48:34.105979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.151 [2024-10-07 09:48:34.105998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.151 [2024-10-07 09:48:34.106009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.151 [2024-10-07 09:48:34.108849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.151 [2024-10-07 09:48:34.118249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.151 [2024-10-07 09:48:34.118577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.151 [2024-10-07 09:48:34.118605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.151 [2024-10-07 09:48:34.118621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.151 [2024-10-07 09:48:34.118885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.151 [2024-10-07 09:48:34.119093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.151 [2024-10-07 09:48:34.119112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.151 [2024-10-07 09:48:34.119125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.151 [2024-10-07 09:48:34.121968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.151 [2024-10-07 09:48:34.131346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.151 [2024-10-07 09:48:34.131756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.151 [2024-10-07 09:48:34.131783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.151 [2024-10-07 09:48:34.131798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.151 [2024-10-07 09:48:34.132024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.151 [2024-10-07 09:48:34.132227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.151 [2024-10-07 09:48:34.132246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.151 [2024-10-07 09:48:34.132259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.151 [2024-10-07 09:48:34.135042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.411 [2024-10-07 09:48:34.144809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.411 [2024-10-07 09:48:34.145163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.411 [2024-10-07 09:48:34.145191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.411 [2024-10-07 09:48:34.145207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.411 [2024-10-07 09:48:34.145435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.411 [2024-10-07 09:48:34.145638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.411 [2024-10-07 09:48:34.145681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.411 [2024-10-07 09:48:34.145704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.411 [2024-10-07 09:48:34.148593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.411 [2024-10-07 09:48:34.158224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.411 [2024-10-07 09:48:34.158572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.411 [2024-10-07 09:48:34.158600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.411 [2024-10-07 09:48:34.158616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.411 [2024-10-07 09:48:34.158854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.411 [2024-10-07 09:48:34.159104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.411 [2024-10-07 09:48:34.159123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.411 [2024-10-07 09:48:34.159135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.411 [2024-10-07 09:48:34.162159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.411 [2024-10-07 09:48:34.171579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.411 [2024-10-07 09:48:34.171944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.411 [2024-10-07 09:48:34.171987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.411 [2024-10-07 09:48:34.172003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.411 [2024-10-07 09:48:34.172236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.411 [2024-10-07 09:48:34.172429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.411 [2024-10-07 09:48:34.172448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.411 [2024-10-07 09:48:34.172459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.411 [2024-10-07 09:48:34.175460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.411 [2024-10-07 09:48:34.184812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.411 [2024-10-07 09:48:34.185120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.411 [2024-10-07 09:48:34.185145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.411 [2024-10-07 09:48:34.185160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.411 [2024-10-07 09:48:34.185354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.411 [2024-10-07 09:48:34.185572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.411 [2024-10-07 09:48:34.185591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.411 [2024-10-07 09:48:34.185603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.411 [2024-10-07 09:48:34.188479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.411 [2024-10-07 09:48:34.197880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.411 [2024-10-07 09:48:34.198221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.411 [2024-10-07 09:48:34.198248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.411 [2024-10-07 09:48:34.198264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.411 [2024-10-07 09:48:34.198499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.411 [2024-10-07 09:48:34.198728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.411 [2024-10-07 09:48:34.198748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.411 [2024-10-07 09:48:34.198761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.411 [2024-10-07 09:48:34.201519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.411 [2024-10-07 09:48:34.210956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.411 [2024-10-07 09:48:34.211324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.411 [2024-10-07 09:48:34.211351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.411 [2024-10-07 09:48:34.211366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.411 [2024-10-07 09:48:34.211582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.412 [2024-10-07 09:48:34.211812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.412 [2024-10-07 09:48:34.211832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.412 [2024-10-07 09:48:34.211844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.412 [2024-10-07 09:48:34.214604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.412 [2024-10-07 09:48:34.224124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.412 [2024-10-07 09:48:34.224430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.412 [2024-10-07 09:48:34.224458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.412 [2024-10-07 09:48:34.224473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.412 [2024-10-07 09:48:34.224696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.412 [2024-10-07 09:48:34.224896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.412 [2024-10-07 09:48:34.224915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.412 [2024-10-07 09:48:34.224928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.412 [2024-10-07 09:48:34.227800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.412 [2024-10-07 09:48:34.237134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.412 [2024-10-07 09:48:34.237425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.412 [2024-10-07 09:48:34.237467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.412 [2024-10-07 09:48:34.237482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.412 [2024-10-07 09:48:34.237707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.412 [2024-10-07 09:48:34.237915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.412 [2024-10-07 09:48:34.237935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.412 [2024-10-07 09:48:34.237948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.412 [2024-10-07 09:48:34.240707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.412 [2024-10-07 09:48:34.250232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.412 [2024-10-07 09:48:34.250643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.412 [2024-10-07 09:48:34.250676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.412 [2024-10-07 09:48:34.250709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.412 [2024-10-07 09:48:34.250942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.412 [2024-10-07 09:48:34.251145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.412 [2024-10-07 09:48:34.251164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.412 [2024-10-07 09:48:34.251176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.412 [2024-10-07 09:48:34.254385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.412 [2024-10-07 09:48:34.263388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.412 [2024-10-07 09:48:34.263699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.412 [2024-10-07 09:48:34.263731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.412 [2024-10-07 09:48:34.263748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.412 [2024-10-07 09:48:34.263963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.412 [2024-10-07 09:48:34.264166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.412 [2024-10-07 09:48:34.264184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.412 [2024-10-07 09:48:34.264196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.412 [2024-10-07 09:48:34.267069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.412 [2024-10-07 09:48:34.276641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.412 [2024-10-07 09:48:34.277077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-10-07 09:48:34.277105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.412 [2024-10-07 09:48:34.277121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.412 [2024-10-07 09:48:34.277367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.412 [2024-10-07 09:48:34.277569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.412 [2024-10-07 09:48:34.277588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.412 [2024-10-07 09:48:34.277600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.412 [2024-10-07 09:48:34.280494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.412 [2024-10-07 09:48:34.289808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.412 [2024-10-07 09:48:34.290170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-10-07 09:48:34.290203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.412 [2024-10-07 09:48:34.290219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.412 [2024-10-07 09:48:34.290452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.412 [2024-10-07 09:48:34.290679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.412 [2024-10-07 09:48:34.290699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.412 [2024-10-07 09:48:34.290712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.412 [2024-10-07 09:48:34.293686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.412 6919.67 IOPS, 27.03 MiB/s [2024-10-07 09:48:34.304059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.412 [2024-10-07 09:48:34.304480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.412 [2024-10-07 09:48:34.304509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.412 [2024-10-07 09:48:34.304526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.412 [2024-10-07 09:48:34.304755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.412 [2024-10-07 09:48:34.304996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.412 [2024-10-07 09:48:34.305028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.412 [2024-10-07 09:48:34.305041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.412 [2024-10-07 09:48:34.308238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.412 [2024-10-07 09:48:34.317773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.412 [2024-10-07 09:48:34.318198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.413 [2024-10-07 09:48:34.318227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.413 [2024-10-07 09:48:34.318245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.413 [2024-10-07 09:48:34.318485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.413 [2024-10-07 09:48:34.318735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.413 [2024-10-07 09:48:34.318757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.413 [2024-10-07 09:48:34.318771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.413 [2024-10-07 09:48:34.321998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.413 [2024-10-07 09:48:34.331415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.413 [2024-10-07 09:48:34.331759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.413 [2024-10-07 09:48:34.331788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.413 [2024-10-07 09:48:34.331805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.413 [2024-10-07 09:48:34.332032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.413 [2024-10-07 09:48:34.332230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.413 [2024-10-07 09:48:34.332265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.413 [2024-10-07 09:48:34.332279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.413 [2024-10-07 09:48:34.335599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.413 [2024-10-07 09:48:34.345091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.413 [2024-10-07 09:48:34.345434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.413 [2024-10-07 09:48:34.345471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.413 [2024-10-07 09:48:34.345505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.413 [2024-10-07 09:48:34.345731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.413 [2024-10-07 09:48:34.345969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.413 [2024-10-07 09:48:34.345994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.413 [2024-10-07 09:48:34.346008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.413 [2024-10-07 09:48:34.349264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.413 [2024-10-07 09:48:34.358474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.413 [2024-10-07 09:48:34.358790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.413 [2024-10-07 09:48:34.358820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.413 [2024-10-07 09:48:34.358837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.413 [2024-10-07 09:48:34.359073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.413 [2024-10-07 09:48:34.359280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.413 [2024-10-07 09:48:34.359299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.413 [2024-10-07 09:48:34.359311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.413 [2024-10-07 09:48:34.362434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.413 [2024-10-07 09:48:34.372121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.413 [2024-10-07 09:48:34.372445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.413 [2024-10-07 09:48:34.372487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.413 [2024-10-07 09:48:34.372504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.413 [2024-10-07 09:48:34.372741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.413 [2024-10-07 09:48:34.372974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.413 [2024-10-07 09:48:34.372994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.413 [2024-10-07 09:48:34.373007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.413 [2024-10-07 09:48:34.376234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.413 [2024-10-07 09:48:34.385402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.413 [2024-10-07 09:48:34.385793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.413 [2024-10-07 09:48:34.385822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.413 [2024-10-07 09:48:34.385840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.413 [2024-10-07 09:48:34.386083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.413 [2024-10-07 09:48:34.386271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.413 [2024-10-07 09:48:34.386289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.413 [2024-10-07 09:48:34.386302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.413 [2024-10-07 09:48:34.389313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.413 [2024-10-07 09:48:34.398676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.413 [2024-10-07 09:48:34.399084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.413 [2024-10-07 09:48:34.399113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.413 [2024-10-07 09:48:34.399133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.413 [2024-10-07 09:48:34.399351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.413 [2024-10-07 09:48:34.399553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.413 [2024-10-07 09:48:34.399572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.413 [2024-10-07 09:48:34.399584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.413 [2024-10-07 09:48:34.402611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.675 [2024-10-07 09:48:34.411913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.675 [2024-10-07 09:48:34.412241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.675 [2024-10-07 09:48:34.412284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.675 [2024-10-07 09:48:34.412299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.675 [2024-10-07 09:48:34.412515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.675 [2024-10-07 09:48:34.412748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.675 [2024-10-07 09:48:34.412769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.675 [2024-10-07 09:48:34.412797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.675 [2024-10-07 09:48:34.415762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.675 [2024-10-07 09:48:34.425132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.675 [2024-10-07 09:48:34.425442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.675 [2024-10-07 09:48:34.425471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.675 [2024-10-07 09:48:34.425487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.675 [2024-10-07 09:48:34.425735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.675 [2024-10-07 09:48:34.425943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.675 [2024-10-07 09:48:34.425963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.675 [2024-10-07 09:48:34.425975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.675 [2024-10-07 09:48:34.428877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.675 [2024-10-07 09:48:34.438300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.675 [2024-10-07 09:48:34.438593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.675 [2024-10-07 09:48:34.438636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.675 [2024-10-07 09:48:34.438651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.675 [2024-10-07 09:48:34.438932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.675 [2024-10-07 09:48:34.439154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.675 [2024-10-07 09:48:34.439179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.675 [2024-10-07 09:48:34.439191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.675 [2024-10-07 09:48:34.442053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.675 [2024-10-07 09:48:34.451310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.675 [2024-10-07 09:48:34.451713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.675 [2024-10-07 09:48:34.451743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.675 [2024-10-07 09:48:34.451759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.675 [2024-10-07 09:48:34.452007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.675 [2024-10-07 09:48:34.452208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.675 [2024-10-07 09:48:34.452228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.675 [2024-10-07 09:48:34.452241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.675 [2024-10-07 09:48:34.455029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.675 [2024-10-07 09:48:34.464436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.675 [2024-10-07 09:48:34.464824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.675 [2024-10-07 09:48:34.464854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.675 [2024-10-07 09:48:34.464871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.675 [2024-10-07 09:48:34.465125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.675 [2024-10-07 09:48:34.465327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.675 [2024-10-07 09:48:34.465346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.675 [2024-10-07 09:48:34.465359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.675 [2024-10-07 09:48:34.468301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.675 [2024-10-07 09:48:34.477513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.675 [2024-10-07 09:48:34.477864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.675 [2024-10-07 09:48:34.477892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.675 [2024-10-07 09:48:34.477909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.676 [2024-10-07 09:48:34.478143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.676 [2024-10-07 09:48:34.478345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.676 [2024-10-07 09:48:34.478364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.676 [2024-10-07 09:48:34.478376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.676 [2024-10-07 09:48:34.481277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.676 [2024-10-07 09:48:34.490572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.676 [2024-10-07 09:48:34.490914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-10-07 09:48:34.490943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.676 [2024-10-07 09:48:34.490960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.676 [2024-10-07 09:48:34.491182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.676 [2024-10-07 09:48:34.491404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.676 [2024-10-07 09:48:34.491425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.676 [2024-10-07 09:48:34.491438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.676 [2024-10-07 09:48:34.494362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.676 [2024-10-07 09:48:34.503674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.676 [2024-10-07 09:48:34.504048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-10-07 09:48:34.504077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.676 [2024-10-07 09:48:34.504093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.676 [2024-10-07 09:48:34.504308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.676 [2024-10-07 09:48:34.504510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.676 [2024-10-07 09:48:34.504530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.676 [2024-10-07 09:48:34.504543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.676 [2024-10-07 09:48:34.507633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.676 [2024-10-07 09:48:34.517131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.676 [2024-10-07 09:48:34.517484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-10-07 09:48:34.517513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.676 [2024-10-07 09:48:34.517530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.676 [2024-10-07 09:48:34.517780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.676 [2024-10-07 09:48:34.517987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.676 [2024-10-07 09:48:34.518007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.676 [2024-10-07 09:48:34.518019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.676 [2024-10-07 09:48:34.520925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.676 [2024-10-07 09:48:34.530274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.676 [2024-10-07 09:48:34.530612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-10-07 09:48:34.530639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.676 [2024-10-07 09:48:34.530655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.676 [2024-10-07 09:48:34.530911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.676 [2024-10-07 09:48:34.531115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.676 [2024-10-07 09:48:34.531135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.676 [2024-10-07 09:48:34.531147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.676 [2024-10-07 09:48:34.534075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.676 [2024-10-07 09:48:34.543499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.676 [2024-10-07 09:48:34.543868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-10-07 09:48:34.543898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.676 [2024-10-07 09:48:34.543915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.676 [2024-10-07 09:48:34.544164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.676 [2024-10-07 09:48:34.544366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.676 [2024-10-07 09:48:34.544386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.676 [2024-10-07 09:48:34.544398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.676 [2024-10-07 09:48:34.547300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.676 [2024-10-07 09:48:34.556578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.676 [2024-10-07 09:48:34.556930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-10-07 09:48:34.556958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.676 [2024-10-07 09:48:34.556973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.676 [2024-10-07 09:48:34.557207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.676 [2024-10-07 09:48:34.557409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.676 [2024-10-07 09:48:34.557429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.676 [2024-10-07 09:48:34.557443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.676 [2024-10-07 09:48:34.560356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.676 [2024-10-07 09:48:34.569600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.676 [2024-10-07 09:48:34.569925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-10-07 09:48:34.569953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.676 [2024-10-07 09:48:34.569969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.676 [2024-10-07 09:48:34.570184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.676 [2024-10-07 09:48:34.570386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.676 [2024-10-07 09:48:34.570406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.676 [2024-10-07 09:48:34.570423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.676 [2024-10-07 09:48:34.573329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.676 [2024-10-07 09:48:34.582661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.676 [2024-10-07 09:48:34.582985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-10-07 09:48:34.583013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.676 [2024-10-07 09:48:34.583029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.677 [2024-10-07 09:48:34.583244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.677 [2024-10-07 09:48:34.583456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.677 [2024-10-07 09:48:34.583476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.677 [2024-10-07 09:48:34.583488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.677 [2024-10-07 09:48:34.586349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.677 [2024-10-07 09:48:34.595739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.677 [2024-10-07 09:48:34.596081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.677 [2024-10-07 09:48:34.596108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.677 [2024-10-07 09:48:34.596124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.677 [2024-10-07 09:48:34.596354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.677 [2024-10-07 09:48:34.596557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.677 [2024-10-07 09:48:34.596577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.677 [2024-10-07 09:48:34.596589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.677 [2024-10-07 09:48:34.599553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.677 [2024-10-07 09:48:34.608865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.677 [2024-10-07 09:48:34.609243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.677 [2024-10-07 09:48:34.609271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.677 [2024-10-07 09:48:34.609287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.677 [2024-10-07 09:48:34.609541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.677 [2024-10-07 09:48:34.609793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.677 [2024-10-07 09:48:34.609818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.677 [2024-10-07 09:48:34.609832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.677 [2024-10-07 09:48:34.613077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.677 [2024-10-07 09:48:34.622204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.677 [2024-10-07 09:48:34.622552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.677 [2024-10-07 09:48:34.622580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.677 [2024-10-07 09:48:34.622596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.677 [2024-10-07 09:48:34.622836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.677 [2024-10-07 09:48:34.623074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.677 [2024-10-07 09:48:34.623094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.677 [2024-10-07 09:48:34.623106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.677 [2024-10-07 09:48:34.625952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.677 [2024-10-07 09:48:34.635291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.677 [2024-10-07 09:48:34.635633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.677 [2024-10-07 09:48:34.635661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.677 [2024-10-07 09:48:34.635687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.677 [2024-10-07 09:48:34.635922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.677 [2024-10-07 09:48:34.636125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.677 [2024-10-07 09:48:34.636144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.677 [2024-10-07 09:48:34.636156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.677 [2024-10-07 09:48:34.638903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.677 [2024-10-07 09:48:34.648394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.677 [2024-10-07 09:48:34.648749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.677 [2024-10-07 09:48:34.648776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.677 [2024-10-07 09:48:34.648791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.677 [2024-10-07 09:48:34.648986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.677 [2024-10-07 09:48:34.649204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.677 [2024-10-07 09:48:34.649222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.677 [2024-10-07 09:48:34.649234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.677 [2024-10-07 09:48:34.652122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.677 [2024-10-07 09:48:34.661463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.677 [2024-10-07 09:48:34.661812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.677 [2024-10-07 09:48:34.661840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.677 [2024-10-07 09:48:34.661856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.677 [2024-10-07 09:48:34.662090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.677 [2024-10-07 09:48:34.662282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.677 [2024-10-07 09:48:34.662301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.677 [2024-10-07 09:48:34.662314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.677 [2024-10-07 09:48:34.665303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.939 [2024-10-07 09:48:34.674544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.939 [2024-10-07 09:48:34.674915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.939 [2024-10-07 09:48:34.674944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.939 [2024-10-07 09:48:34.674960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.939 [2024-10-07 09:48:34.675213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.939 [2024-10-07 09:48:34.675436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.939 [2024-10-07 09:48:34.675456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.939 [2024-10-07 09:48:34.675468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.939 [2024-10-07 09:48:34.678503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.939 [2024-10-07 09:48:34.687626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.939 [2024-10-07 09:48:34.687945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.939 [2024-10-07 09:48:34.687973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.939 [2024-10-07 09:48:34.687989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.939 [2024-10-07 09:48:34.688204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.939 [2024-10-07 09:48:34.688407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.939 [2024-10-07 09:48:34.688427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.939 [2024-10-07 09:48:34.688439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.939 [2024-10-07 09:48:34.691377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.939 [2024-10-07 09:48:34.700697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.939 [2024-10-07 09:48:34.701040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.939 [2024-10-07 09:48:34.701068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.939 [2024-10-07 09:48:34.701084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.939 [2024-10-07 09:48:34.701313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.939 [2024-10-07 09:48:34.701516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.939 [2024-10-07 09:48:34.701536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.939 [2024-10-07 09:48:34.701548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.939 [2024-10-07 09:48:34.704463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.939 [2024-10-07 09:48:34.713677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.939 [2024-10-07 09:48:34.713987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.939 [2024-10-07 09:48:34.714014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.939 [2024-10-07 09:48:34.714030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.939 [2024-10-07 09:48:34.714246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.939 [2024-10-07 09:48:34.714449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.939 [2024-10-07 09:48:34.714469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.939 [2024-10-07 09:48:34.714481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.939 [2024-10-07 09:48:34.717387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.939 [2024-10-07 09:48:34.726800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.939 [2024-10-07 09:48:34.727106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.939 [2024-10-07 09:48:34.727135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.939 [2024-10-07 09:48:34.727150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.939 [2024-10-07 09:48:34.727365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.939 [2024-10-07 09:48:34.727569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.939 [2024-10-07 09:48:34.727589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.939 [2024-10-07 09:48:34.727601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.939 [2024-10-07 09:48:34.730505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.939 [2024-10-07 09:48:34.739925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.939 [2024-10-07 09:48:34.740264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.939 [2024-10-07 09:48:34.740292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.939 [2024-10-07 09:48:34.740308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.939 [2024-10-07 09:48:34.740542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.939 [2024-10-07 09:48:34.740774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.939 [2024-10-07 09:48:34.740795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.939 [2024-10-07 09:48:34.740808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.939 [2024-10-07 09:48:34.743645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.939 [2024-10-07 09:48:34.752958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.939 [2024-10-07 09:48:34.753315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.939 [2024-10-07 09:48:34.753347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.939 [2024-10-07 09:48:34.753364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.939 [2024-10-07 09:48:34.753598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.939 [2024-10-07 09:48:34.753828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.939 [2024-10-07 09:48:34.753849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.939 [2024-10-07 09:48:34.753862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.756861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.766399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.766805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.766835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.766852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.767080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.767309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.767329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.767341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.770322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.779482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.779917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.779947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.779979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.780211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.780397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.780417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.780430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.783333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.792739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.793166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.793218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.793234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.793476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.793677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.793698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.793711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.796570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.805965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.806309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.806337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.806353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.806582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.806815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.806836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.806849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.809708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.819143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.819536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.819588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.819604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.819859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.820065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.820085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.820097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.822942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.832266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.832715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.832743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.832759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.833004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.833191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.833211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.833223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.836111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.845403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.845748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.845777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.845793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.846026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.846228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.846246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.846258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.849164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.858568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.858895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.858924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.858940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.859166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.859368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.859387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.859399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.862304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.871678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.872018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.872045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.872061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.872277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.872480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.872500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.872512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.940 [2024-10-07 09:48:34.875462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.940 [2024-10-07 09:48:34.884788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.940 [2024-10-07 09:48:34.885214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.940 [2024-10-07 09:48:34.885242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.940 [2024-10-07 09:48:34.885264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.940 [2024-10-07 09:48:34.885499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.940 [2024-10-07 09:48:34.885747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.940 [2024-10-07 09:48:34.885769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.940 [2024-10-07 09:48:34.885782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.941 [2024-10-07 09:48:34.888662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.941 [2024-10-07 09:48:34.897936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.941 [2024-10-07 09:48:34.898284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.941 [2024-10-07 09:48:34.898312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:45.941 [2024-10-07 09:48:34.898328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:45.941 [2024-10-07 09:48:34.898562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:45.941 [2024-10-07 09:48:34.898813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.941 [2024-10-07 09:48:34.898836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.941 [2024-10-07 09:48:34.898850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.941 [2024-10-07 09:48:34.901743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.941 [2024-10-07 09:48:34.911013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.941 [2024-10-07 09:48:34.911307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.941 [2024-10-07 09:48:34.911334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.941 [2024-10-07 09:48:34.911350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.941 [2024-10-07 09:48:34.911559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.941 [2024-10-07 09:48:34.911793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.941 [2024-10-07 09:48:34.911814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.941 [2024-10-07 09:48:34.911827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.941 [2024-10-07 09:48:34.914664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:45.941 [2024-10-07 09:48:34.924120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:45.941 [2024-10-07 09:48:34.924429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.941 [2024-10-07 09:48:34.924457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:45.941 [2024-10-07 09:48:34.924472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:45.941 [2024-10-07 09:48:34.924700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:45.941 [2024-10-07 09:48:34.924908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:45.941 [2024-10-07 09:48:34.924933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:45.941 [2024-10-07 09:48:34.924947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:45.941 [2024-10-07 09:48:34.927803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:34.937239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:34.937549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:34.937577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:34.937594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:34.937857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:34.938095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:34.938115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:34.938129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:34.941203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:34.950327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:34.950730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:34.950757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:34.950774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:34.951003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:34.951207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:34.951226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:34.951239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:34.954025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:34.963291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:34.963635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:34.963663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:34.963692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:34.963926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:34.964130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:34.964151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:34.964163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:34.967025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:34.976433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:34.976808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:34.976837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:34.976853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:34.977069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:34.977273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:34.977293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:34.977305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:34.980248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:34.989510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:34.989926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:34.989955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:34.989971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:34.990204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:34.990408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:34.990427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:34.990440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:34.993368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:35.002523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:35.002876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:35.002906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:35.002923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:35.003156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:35.003343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:35.003363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:35.003375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:35.006278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:35.015997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:35.016455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:35.016484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:35.016501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:35.016770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:35.016992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:35.017013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:35.017026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:35.020219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:35.029097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:35.029551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:35.029607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:35.029623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:35.029881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:35.030104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:35.030124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:35.030136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:35.033036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:35.042199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:35.042606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:35.042635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:35.042652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:35.042914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:35.043152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:35.043172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.202 [2024-10-07 09:48:35.043185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.202 [2024-10-07 09:48:35.046091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.202 [2024-10-07 09:48:35.055349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.202 [2024-10-07 09:48:35.055696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.202 [2024-10-07 09:48:35.055724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.202 [2024-10-07 09:48:35.055739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.202 [2024-10-07 09:48:35.055974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.202 [2024-10-07 09:48:35.056177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.202 [2024-10-07 09:48:35.056195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.056212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.059118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.068484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.068861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.068889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.068905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.069121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.069324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.069343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.069356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.072263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.081509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.081907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.081936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.081953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.082203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.082389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.082409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.082422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.085331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.094608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.094980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.095009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.095026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.095260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.095466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.095486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.095498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.098451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.107695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.108065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.108095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.108111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.108346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.108541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.108561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.108573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.111460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.120714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.121056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.121084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.121101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.121333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.121537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.121557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.121569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.124459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.133715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.134058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.134087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.134104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.134338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.134541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.134561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.134573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.137467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.146756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.147161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.147189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.147205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.147441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.147651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.147696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.147711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.150475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.159907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.160263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.160292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.160308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.160543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.160794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.160816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.160829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.163712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.173116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.173460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.173488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.173504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.173749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.173948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.173969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.173982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.176863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.203 [2024-10-07 09:48:35.186252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.203 [2024-10-07 09:48:35.186570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.203 [2024-10-07 09:48:35.186598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.203 [2024-10-07 09:48:35.186614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.203 [2024-10-07 09:48:35.186870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.203 [2024-10-07 09:48:35.187096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.203 [2024-10-07 09:48:35.187115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.203 [2024-10-07 09:48:35.187128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.203 [2024-10-07 09:48:35.190037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.463 [2024-10-07 09:48:35.199595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.463 [2024-10-07 09:48:35.200031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.463 [2024-10-07 09:48:35.200060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.463 [2024-10-07 09:48:35.200075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.463 [2024-10-07 09:48:35.200309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.463 [2024-10-07 09:48:35.200519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.463 [2024-10-07 09:48:35.200540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.463 [2024-10-07 09:48:35.200553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.463 [2024-10-07 09:48:35.203703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.463 [2024-10-07 09:48:35.212629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.463 [2024-10-07 09:48:35.213040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.463 [2024-10-07 09:48:35.213069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.463 [2024-10-07 09:48:35.213084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.463 [2024-10-07 09:48:35.213320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.464 [2024-10-07 09:48:35.213522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.464 [2024-10-07 09:48:35.213543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.464 [2024-10-07 09:48:35.213555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.464 [2024-10-07 09:48:35.216462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.464 [2024-10-07 09:48:35.225712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.464 [2024-10-07 09:48:35.226054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.464 [2024-10-07 09:48:35.226082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.464 [2024-10-07 09:48:35.226099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.464 [2024-10-07 09:48:35.226332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.464 [2024-10-07 09:48:35.226535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.464 [2024-10-07 09:48:35.226567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.464 [2024-10-07 09:48:35.226580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.464 [2024-10-07 09:48:35.229485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.464 [2024-10-07 09:48:35.238814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.464 [2024-10-07 09:48:35.239185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.464 [2024-10-07 09:48:35.239212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.464 [2024-10-07 09:48:35.239233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.464 [2024-10-07 09:48:35.239449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.464 [2024-10-07 09:48:35.239652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.464 [2024-10-07 09:48:35.239697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.464 [2024-10-07 09:48:35.239713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.464 [2024-10-07 09:48:35.242478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.464 [2024-10-07 09:48:35.251926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.464 [2024-10-07 09:48:35.252284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.464 [2024-10-07 09:48:35.252312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.464 [2024-10-07 09:48:35.252328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.464 [2024-10-07 09:48:35.252563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.464 [2024-10-07 09:48:35.252813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.464 [2024-10-07 09:48:35.252835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.464 [2024-10-07 09:48:35.252848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.464 [2024-10-07 09:48:35.255733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.464 [2024-10-07 09:48:35.264914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.464 [2024-10-07 09:48:35.265224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.464 [2024-10-07 09:48:35.265252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.464 [2024-10-07 09:48:35.265267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.464 [2024-10-07 09:48:35.265483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.464 [2024-10-07 09:48:35.265711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.464 [2024-10-07 09:48:35.265733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.464 [2024-10-07 09:48:35.265746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.464 [2024-10-07 09:48:35.268789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.464 [2024-10-07 09:48:35.278355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.464 [2024-10-07 09:48:35.278715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-10-07 09:48:35.278761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.464 [2024-10-07 09:48:35.278778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.464 [2024-10-07 09:48:35.279012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.464 [2024-10-07 09:48:35.279220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.464 [2024-10-07 09:48:35.279240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.464 [2024-10-07 09:48:35.279253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.464 [2024-10-07 09:48:35.282196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.464 [2024-10-07 09:48:35.291490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.464 [2024-10-07 09:48:35.291801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-10-07 09:48:35.291843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.464 [2024-10-07 09:48:35.291858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.464 [2024-10-07 09:48:35.292088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.464 [2024-10-07 09:48:35.292308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.464 [2024-10-07 09:48:35.292328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.464 [2024-10-07 09:48:35.292340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.464 [2024-10-07 09:48:35.295253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.464 5189.75 IOPS, 20.27 MiB/s [2024-10-07 09:48:35.305772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.464 [2024-10-07 09:48:35.306145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-10-07 09:48:35.306173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.464 [2024-10-07 09:48:35.306188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.464 [2024-10-07 09:48:35.306403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.464 [2024-10-07 09:48:35.306606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.464 [2024-10-07 09:48:35.306624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.464 [2024-10-07 09:48:35.306637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.464 [2024-10-07 09:48:35.309501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.464 [2024-10-07 09:48:35.318833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.464 [2024-10-07 09:48:35.319139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-10-07 09:48:35.319167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.464 [2024-10-07 09:48:35.319183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.464 [2024-10-07 09:48:35.319398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.464 [2024-10-07 09:48:35.319599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.464 [2024-10-07 09:48:35.319619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.464 [2024-10-07 09:48:35.319631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.464 [2024-10-07 09:48:35.322549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.464 [2024-10-07 09:48:35.332007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.464 [2024-10-07 09:48:35.332347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.464 [2024-10-07 09:48:35.332375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.464 [2024-10-07 09:48:35.332391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.464 [2024-10-07 09:48:35.332625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.464 [2024-10-07 09:48:35.332857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.332878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.332890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.335749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.345173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.345564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-10-07 09:48:35.345618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.465 [2024-10-07 09:48:35.345634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.465 [2024-10-07 09:48:35.345912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.465 [2024-10-07 09:48:35.346116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.346135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.346147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.349039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.358343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.358821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-10-07 09:48:35.358851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.465 [2024-10-07 09:48:35.358867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.465 [2024-10-07 09:48:35.359107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.465 [2024-10-07 09:48:35.359301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.359321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.359334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.362358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.371719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.372053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-10-07 09:48:35.372107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.465 [2024-10-07 09:48:35.372128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.465 [2024-10-07 09:48:35.372356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.465 [2024-10-07 09:48:35.372548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.372568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.372580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.375542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.385006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.385444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-10-07 09:48:35.385493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.465 [2024-10-07 09:48:35.385509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.465 [2024-10-07 09:48:35.385773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.465 [2024-10-07 09:48:35.385992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.386013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.386025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.388980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.398353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.398721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-10-07 09:48:35.398753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.465 [2024-10-07 09:48:35.398770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.465 [2024-10-07 09:48:35.399014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.465 [2024-10-07 09:48:35.399207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.399227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.399240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.402238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.411745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.412149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-10-07 09:48:35.412177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.465 [2024-10-07 09:48:35.412194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.465 [2024-10-07 09:48:35.412428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.465 [2024-10-07 09:48:35.412638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.412689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.412705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.415689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.424946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.425394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-10-07 09:48:35.425423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.465 [2024-10-07 09:48:35.425439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.465 [2024-10-07 09:48:35.425689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.465 [2024-10-07 09:48:35.425926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.425948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.425962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.429220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.438173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.438473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.465 [2024-10-07 09:48:35.438500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.465 [2024-10-07 09:48:35.438516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.465 [2024-10-07 09:48:35.438782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.465 [2024-10-07 09:48:35.438989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.465 [2024-10-07 09:48:35.439010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.465 [2024-10-07 09:48:35.439024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.465 [2024-10-07 09:48:35.442245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.465 [2024-10-07 09:48:35.451591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.465 [2024-10-07 09:48:35.451940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.466 [2024-10-07 09:48:35.451969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.466 [2024-10-07 09:48:35.451985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.466 [2024-10-07 09:48:35.452216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.466 [2024-10-07 09:48:35.452426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.466 [2024-10-07 09:48:35.452446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.466 [2024-10-07 09:48:35.452458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.466 [2024-10-07 09:48:35.455807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.727 [2024-10-07 09:48:35.465149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.727 [2024-10-07 09:48:35.465504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.727 [2024-10-07 09:48:35.465533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.727 [2024-10-07 09:48:35.465549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.727 [2024-10-07 09:48:35.465775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.727 [2024-10-07 09:48:35.466034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.727 [2024-10-07 09:48:35.466054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.727 [2024-10-07 09:48:35.466066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.727 [2024-10-07 09:48:35.469113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.727 [2024-10-07 09:48:35.478413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.727 [2024-10-07 09:48:35.478765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.727 [2024-10-07 09:48:35.478794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.727 [2024-10-07 09:48:35.478810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.727 [2024-10-07 09:48:35.479024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.727 [2024-10-07 09:48:35.479232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.727 [2024-10-07 09:48:35.479251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.727 [2024-10-07 09:48:35.479262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.727 [2024-10-07 09:48:35.482296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.727 [2024-10-07 09:48:35.491778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.727 [2024-10-07 09:48:35.492121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.727 [2024-10-07 09:48:35.492149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.727 [2024-10-07 09:48:35.492165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.727 [2024-10-07 09:48:35.492385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.727 [2024-10-07 09:48:35.492612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.727 [2024-10-07 09:48:35.492631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.727 [2024-10-07 09:48:35.492659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.727 [2024-10-07 09:48:35.495763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.727 [2024-10-07 09:48:35.505009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.727 [2024-10-07 09:48:35.505400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.727 [2024-10-07 09:48:35.505430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.727 [2024-10-07 09:48:35.505447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.727 [2024-10-07 09:48:35.505709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.727 [2024-10-07 09:48:35.505909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.727 [2024-10-07 09:48:35.505929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.505953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.508909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.518278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.518575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.518618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.518635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.518886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.519122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.519150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.519162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.522457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.531721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.532129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.532157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.532173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.532380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.532604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.532624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.532637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.535711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.545224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.545639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.545677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.545696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.545910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.546139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.546158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.546176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.549222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.558496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.558817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.558846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.558863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.559096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.559289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.559308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.559321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.562293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.571792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.572174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.572203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.572219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.572459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.572677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.572698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.572718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.575756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.585022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.585344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.585373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.585390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.585614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.585863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.585885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.585898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.588909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.598391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.598819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.598849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.598866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.599109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.599317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.599337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.599349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.602372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.611594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.611917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.611961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.611977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.612197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.612404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.612425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.612437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.615417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.624920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.625290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.625319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.728 [2024-10-07 09:48:35.625336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.728 [2024-10-07 09:48:35.625576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.728 [2024-10-07 09:48:35.625834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.728 [2024-10-07 09:48:35.625857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.728 [2024-10-07 09:48:35.625871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.728 [2024-10-07 09:48:35.628864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.728 [2024-10-07 09:48:35.638103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.728 [2024-10-07 09:48:35.638408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-10-07 09:48:35.638436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.729 [2024-10-07 09:48:35.638453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.729 [2024-10-07 09:48:35.638678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.729 [2024-10-07 09:48:35.638903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.729 [2024-10-07 09:48:35.638924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.729 [2024-10-07 09:48:35.638937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.729 [2024-10-07 09:48:35.641896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.729 [2024-10-07 09:48:35.651334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.729 [2024-10-07 09:48:35.651687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-10-07 09:48:35.651717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.729 [2024-10-07 09:48:35.651733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.729 [2024-10-07 09:48:35.651965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.729 [2024-10-07 09:48:35.652175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.729 [2024-10-07 09:48:35.652195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.729 [2024-10-07 09:48:35.652207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.729 [2024-10-07 09:48:35.655200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.729 [2024-10-07 09:48:35.664582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.729 [2024-10-07 09:48:35.664997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-10-07 09:48:35.665025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.729 [2024-10-07 09:48:35.665040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.729 [2024-10-07 09:48:35.665256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.729 [2024-10-07 09:48:35.665464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.729 [2024-10-07 09:48:35.665484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.729 [2024-10-07 09:48:35.665497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.729 [2024-10-07 09:48:35.668486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.729 [2024-10-07 09:48:35.677904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.729 [2024-10-07 09:48:35.678297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-10-07 09:48:35.678325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.729 [2024-10-07 09:48:35.678341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.729 [2024-10-07 09:48:35.678563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.729 [2024-10-07 09:48:35.678815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.729 [2024-10-07 09:48:35.678838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.729 [2024-10-07 09:48:35.678852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.729 [2024-10-07 09:48:35.681847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.729 [2024-10-07 09:48:35.691136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.729 [2024-10-07 09:48:35.691491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-10-07 09:48:35.691520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.729 [2024-10-07 09:48:35.691537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.729 [2024-10-07 09:48:35.691791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.729 [2024-10-07 09:48:35.692010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.729 [2024-10-07 09:48:35.692031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.729 [2024-10-07 09:48:35.692058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.729 [2024-10-07 09:48:35.695043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.729 [2024-10-07 09:48:35.704466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.729 [2024-10-07 09:48:35.704844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-10-07 09:48:35.704874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.729 [2024-10-07 09:48:35.704890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.729 [2024-10-07 09:48:35.705131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.729 [2024-10-07 09:48:35.705339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.729 [2024-10-07 09:48:35.705359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.729 [2024-10-07 09:48:35.705371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.729 [2024-10-07 09:48:35.708372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.729 [2024-10-07 09:48:35.717881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.729 [2024-10-07 09:48:35.718351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-10-07 09:48:35.718380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.729 [2024-10-07 09:48:35.718397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.729 [2024-10-07 09:48:35.718647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.729 [2024-10-07 09:48:35.718896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.729 [2024-10-07 09:48:35.718919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.729 [2024-10-07 09:48:35.718932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.729 [2024-10-07 09:48:35.722130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.990 [2024-10-07 09:48:35.731354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.990 [2024-10-07 09:48:35.731766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.990 [2024-10-07 09:48:35.731795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.990 [2024-10-07 09:48:35.731817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.990 [2024-10-07 09:48:35.732053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.990 [2024-10-07 09:48:35.732263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.990 [2024-10-07 09:48:35.732284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.990 [2024-10-07 09:48:35.732297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.990 [2024-10-07 09:48:35.735267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.990 [2024-10-07 09:48:35.744727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.990 [2024-10-07 09:48:35.745088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.990 [2024-10-07 09:48:35.745116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.990 [2024-10-07 09:48:35.745133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.990 [2024-10-07 09:48:35.745374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.990 [2024-10-07 09:48:35.745601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.990 [2024-10-07 09:48:35.745620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.990 [2024-10-07 09:48:35.745633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.990 [2024-10-07 09:48:35.748602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.990 [2024-10-07 09:48:35.758010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.990 [2024-10-07 09:48:35.758361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.990 [2024-10-07 09:48:35.758390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.990 [2024-10-07 09:48:35.758406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.990 [2024-10-07 09:48:35.758647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.990 [2024-10-07 09:48:35.758894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.990 [2024-10-07 09:48:35.758917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.990 [2024-10-07 09:48:35.758930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.990 [2024-10-07 09:48:35.761883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.990 [2024-10-07 09:48:35.771294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.990 [2024-10-07 09:48:35.771678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.990 [2024-10-07 09:48:35.771707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.990 [2024-10-07 09:48:35.771738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.990 [2024-10-07 09:48:35.771967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.990 [2024-10-07 09:48:35.772198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.990 [2024-10-07 09:48:35.772219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.990 [2024-10-07 09:48:35.772232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.990 [2024-10-07 09:48:35.775262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.990 [2024-10-07 09:48:35.784730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.990 [2024-10-07 09:48:35.785151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.990 [2024-10-07 09:48:35.785179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.990 [2024-10-07 09:48:35.785195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.990 [2024-10-07 09:48:35.785431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.990 [2024-10-07 09:48:35.785625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.990 [2024-10-07 09:48:35.785644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.990 [2024-10-07 09:48:35.785680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.990 [2024-10-07 09:48:35.788922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.990 [2024-10-07 09:48:35.798099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.990 [2024-10-07 09:48:35.798494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.990 [2024-10-07 09:48:35.798523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.990 [2024-10-07 09:48:35.798538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.990 [2024-10-07 09:48:35.798792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.990 [2024-10-07 09:48:35.799040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.990 [2024-10-07 09:48:35.799061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.990 [2024-10-07 09:48:35.799074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.990 [2024-10-07 09:48:35.801981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.990 [2024-10-07 09:48:35.811438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.990 [2024-10-07 09:48:35.811825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.990 [2024-10-07 09:48:35.811855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.990 [2024-10-07 09:48:35.811872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.990 [2024-10-07 09:48:35.812114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.990 [2024-10-07 09:48:35.812323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.990 [2024-10-07 09:48:35.812343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.990 [2024-10-07 09:48:35.812356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.990 [2024-10-07 09:48:35.815339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.990 [2024-10-07 09:48:35.824749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.990 [2024-10-07 09:48:35.825118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.990 [2024-10-07 09:48:35.825147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.990 [2024-10-07 09:48:35.825164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.990 [2024-10-07 09:48:35.825406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.990 [2024-10-07 09:48:35.825619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.990 [2024-10-07 09:48:35.825639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.990 [2024-10-07 09:48:35.825653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.990 [2024-10-07 09:48:35.828657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.991 [2024-10-07 09:48:35.837949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.991 [2024-10-07 09:48:35.838314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.991 [2024-10-07 09:48:35.838342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.991 [2024-10-07 09:48:35.838358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.991 [2024-10-07 09:48:35.838593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.991 [2024-10-07 09:48:35.838832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.991 [2024-10-07 09:48:35.838855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.991 [2024-10-07 09:48:35.838868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.991 [2024-10-07 09:48:35.841830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.991 [2024-10-07 09:48:35.851183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.991 [2024-10-07 09:48:35.851592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.991 [2024-10-07 09:48:35.851620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.991 [2024-10-07 09:48:35.851636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.991 [2024-10-07 09:48:35.851899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.991 [2024-10-07 09:48:35.852112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.991 [2024-10-07 09:48:35.852132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.991 [2024-10-07 09:48:35.852145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.991 [2024-10-07 09:48:35.855088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.991 [2024-10-07 09:48:35.864472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.991 [2024-10-07 09:48:35.864845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.991 [2024-10-07 09:48:35.864875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.991 [2024-10-07 09:48:35.864897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.991 [2024-10-07 09:48:35.865150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.991 [2024-10-07 09:48:35.865359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.991 [2024-10-07 09:48:35.865379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.991 [2024-10-07 09:48:35.865392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.991 [2024-10-07 09:48:35.868363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.991 [2024-10-07 09:48:35.877786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.991 [2024-10-07 09:48:35.878154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.991 [2024-10-07 09:48:35.878182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:46.991 [2024-10-07 09:48:35.878197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:46.991 [2024-10-07 09:48:35.878432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:46.991 [2024-10-07 09:48:35.878641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.991 [2024-10-07 09:48:35.878661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.991 [2024-10-07 09:48:35.878698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.991 [2024-10-07 09:48:35.881623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.991 [2024-10-07 09:48:35.891021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.991 [2024-10-07 09:48:35.891371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.991 [2024-10-07 09:48:35.891400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.991 [2024-10-07 09:48:35.891417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.991 [2024-10-07 09:48:35.891659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.991 [2024-10-07 09:48:35.891894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.991 [2024-10-07 09:48:35.891917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.991 [2024-10-07 09:48:35.891931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.991 [2024-10-07 09:48:35.894929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.991 [2024-10-07 09:48:35.904229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.991 [2024-10-07 09:48:35.904584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.991 [2024-10-07 09:48:35.904613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.991 [2024-10-07 09:48:35.904629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.991 [2024-10-07 09:48:35.904871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.991 [2024-10-07 09:48:35.905116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.991 [2024-10-07 09:48:35.905141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.991 [2024-10-07 09:48:35.905154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.991 [2024-10-07 09:48:35.908099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.991 [2024-10-07 09:48:35.917534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.991 [2024-10-07 09:48:35.917922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.991 [2024-10-07 09:48:35.917951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.991 [2024-10-07 09:48:35.917968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.991 [2024-10-07 09:48:35.918210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.991 [2024-10-07 09:48:35.918420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.991 [2024-10-07 09:48:35.918440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.991 [2024-10-07 09:48:35.918453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.991 [2024-10-07 09:48:35.921435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.991 [2024-10-07 09:48:35.930885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.991 [2024-10-07 09:48:35.931252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.991 [2024-10-07 09:48:35.931281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.991 [2024-10-07 09:48:35.931297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.991 [2024-10-07 09:48:35.931538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.991 [2024-10-07 09:48:35.931792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.991 [2024-10-07 09:48:35.931814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.991 [2024-10-07 09:48:35.931828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.992 [2024-10-07 09:48:35.934796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.992 [2024-10-07 09:48:35.944198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.992 [2024-10-07 09:48:35.944576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.992 [2024-10-07 09:48:35.944604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.992 [2024-10-07 09:48:35.944619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.992 [2024-10-07 09:48:35.944872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.992 [2024-10-07 09:48:35.945099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.992 [2024-10-07 09:48:35.945120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.992 [2024-10-07 09:48:35.945132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.992 [2024-10-07 09:48:35.948113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.992 [2024-10-07 09:48:35.957503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.992 [2024-10-07 09:48:35.957826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.992 [2024-10-07 09:48:35.957871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.992 [2024-10-07 09:48:35.957888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.992 [2024-10-07 09:48:35.958121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.992 [2024-10-07 09:48:35.958328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.992 [2024-10-07 09:48:35.958348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.992 [2024-10-07 09:48:35.958361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.992 [2024-10-07 09:48:35.961343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.992 [2024-10-07 09:48:35.970723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.992 [2024-10-07 09:48:35.971136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.992 [2024-10-07 09:48:35.971164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.992 [2024-10-07 09:48:35.971180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.992 [2024-10-07 09:48:35.971414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.992 [2024-10-07 09:48:35.971621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.992 [2024-10-07 09:48:35.971642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.992 [2024-10-07 09:48:35.971654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.992 [2024-10-07 09:48:35.974598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.992 [2024-10-07 09:48:35.984192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.992 [2024-10-07 09:48:35.984625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.992 [2024-10-07 09:48:35.984654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:46.992 [2024-10-07 09:48:35.984680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:46.992 [2024-10-07 09:48:35.984911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:46.992 [2024-10-07 09:48:35.985138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.992 [2024-10-07 09:48:35.985158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.992 [2024-10-07 09:48:35.985172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.255 [2024-10-07 09:48:35.988211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.255 [2024-10-07 09:48:35.997397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.255 [2024-10-07 09:48:35.997718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.255 [2024-10-07 09:48:35.997748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.255 [2024-10-07 09:48:35.997765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.255 [2024-10-07 09:48:35.998014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.255 [2024-10-07 09:48:35.998243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.255 [2024-10-07 09:48:35.998264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.255 [2024-10-07 09:48:35.998278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.255 [2024-10-07 09:48:36.001283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.255 [2024-10-07 09:48:36.010563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.255 [2024-10-07 09:48:36.010915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.255 [2024-10-07 09:48:36.010944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.255 [2024-10-07 09:48:36.010962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.255 [2024-10-07 09:48:36.011184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.255 [2024-10-07 09:48:36.011392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.255 [2024-10-07 09:48:36.011412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.255 [2024-10-07 09:48:36.011425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.255 [2024-10-07 09:48:36.014409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.255 [2024-10-07 09:48:36.023823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.255 [2024-10-07 09:48:36.024201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.255 [2024-10-07 09:48:36.024230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.255 [2024-10-07 09:48:36.024246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.255 [2024-10-07 09:48:36.024461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.255 [2024-10-07 09:48:36.024691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.024714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.024727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.027764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.037113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.037439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.037469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.037486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.037750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.037949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.037985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.038004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.040999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.050335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.050682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.050712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.050729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.050970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.051180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.051200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.051212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.054235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.063622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.063971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.064001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.064033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.064268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.064477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.064497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.064511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.067493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.076896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.077219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.077247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.077263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.077478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.077728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.077750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.077764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.080728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.090168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.090581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.090615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.090633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.090886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.091099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.091119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.091132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.094111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.103405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.103778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.103808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.103824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.104046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.104254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.104274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.104287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.107269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.116624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.117067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.117098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.117114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.117355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.117563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.117584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.117596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.120616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.129917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.130288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.130316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.130348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.130588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.130835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.130858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.130871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.133818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.143219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.143567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.143596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.143612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.143880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.144093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.144114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.144127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.147106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.156478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.256 [2024-10-07 09:48:36.156850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.256 [2024-10-07 09:48:36.156879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.256 [2024-10-07 09:48:36.156896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.256 [2024-10-07 09:48:36.157136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.256 [2024-10-07 09:48:36.157343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.256 [2024-10-07 09:48:36.157364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.256 [2024-10-07 09:48:36.157376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.256 [2024-10-07 09:48:36.160361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.256 [2024-10-07 09:48:36.169770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.257 [2024-10-07 09:48:36.170136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.257 [2024-10-07 09:48:36.170164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.257 [2024-10-07 09:48:36.170180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.257 [2024-10-07 09:48:36.170413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.257 [2024-10-07 09:48:36.170622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.257 [2024-10-07 09:48:36.170642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.257 [2024-10-07 09:48:36.170655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.257 [2024-10-07 09:48:36.173644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.257 [2024-10-07 09:48:36.183105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.257 [2024-10-07 09:48:36.183461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.257 [2024-10-07 09:48:36.183490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.257 [2024-10-07 09:48:36.183506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.257 [2024-10-07 09:48:36.183758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.257 [2024-10-07 09:48:36.183963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.257 [2024-10-07 09:48:36.184000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.257 [2024-10-07 09:48:36.184013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.257 [2024-10-07 09:48:36.186973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.257 [2024-10-07 09:48:36.196457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.257 [2024-10-07 09:48:36.196810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.257 [2024-10-07 09:48:36.196839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.257 [2024-10-07 09:48:36.196856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.257 [2024-10-07 09:48:36.197093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.257 [2024-10-07 09:48:36.197302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.257 [2024-10-07 09:48:36.197323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.257 [2024-10-07 09:48:36.197335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.257 [2024-10-07 09:48:36.200332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.257 [2024-10-07 09:48:36.209765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.257 [2024-10-07 09:48:36.210137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.257 [2024-10-07 09:48:36.210167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.257 [2024-10-07 09:48:36.210183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.257 [2024-10-07 09:48:36.210425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.257 [2024-10-07 09:48:36.210617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.257 [2024-10-07 09:48:36.210636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.257 [2024-10-07 09:48:36.210649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.257 [2024-10-07 09:48:36.213638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.257 [2024-10-07 09:48:36.223145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.257 [2024-10-07 09:48:36.223495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.257 [2024-10-07 09:48:36.223524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.257 [2024-10-07 09:48:36.223545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.257 [2024-10-07 09:48:36.223797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.257 [2024-10-07 09:48:36.224031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.257 [2024-10-07 09:48:36.224051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.257 [2024-10-07 09:48:36.224064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.257 [2024-10-07 09:48:36.227095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.257 [2024-10-07 09:48:36.236402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.257 [2024-10-07 09:48:36.236707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.257 [2024-10-07 09:48:36.236736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.257 [2024-10-07 09:48:36.236753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.257 [2024-10-07 09:48:36.236964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.257 [2024-10-07 09:48:36.237157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.257 [2024-10-07 09:48:36.237177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.257 [2024-10-07 09:48:36.237190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.257 [2024-10-07 09:48:36.240161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.257 [2024-10-07 09:48:36.249956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.519 [2024-10-07 09:48:36.250343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.519 [2024-10-07 09:48:36.250372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.519 [2024-10-07 09:48:36.250389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.519 [2024-10-07 09:48:36.250620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.519 [2024-10-07 09:48:36.250856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.519 [2024-10-07 09:48:36.250879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.519 [2024-10-07 09:48:36.250893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.519 [2024-10-07 09:48:36.253876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.519 [2024-10-07 09:48:36.263183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.519 [2024-10-07 09:48:36.263637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.519 [2024-10-07 09:48:36.263695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.519 [2024-10-07 09:48:36.263714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.519 [2024-10-07 09:48:36.263959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.519 [2024-10-07 09:48:36.264161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.519 [2024-10-07 09:48:36.264185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.519 [2024-10-07 09:48:36.264198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.519 [2024-10-07 09:48:36.267134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.519 [2024-10-07 09:48:36.276254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.519 [2024-10-07 09:48:36.276639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.519 [2024-10-07 09:48:36.276711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.519 [2024-10-07 09:48:36.276728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.519 [2024-10-07 09:48:36.276981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.519 [2024-10-07 09:48:36.277201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.519 [2024-10-07 09:48:36.277221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.519 [2024-10-07 09:48:36.277234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.519 [2024-10-07 09:48:36.280270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.519 [2024-10-07 09:48:36.289763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.519 [2024-10-07 09:48:36.290148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.519 [2024-10-07 09:48:36.290176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.519 [2024-10-07 09:48:36.290192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.519 [2024-10-07 09:48:36.290428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.519 [2024-10-07 09:48:36.290632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.519 [2024-10-07 09:48:36.290652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.519 [2024-10-07 09:48:36.290675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.293617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.302820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.303138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.303209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.303225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.303453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.303656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.303686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.303700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 4151.80 IOPS, 16.22 MiB/s [2024-10-07 09:48:36.307875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.315893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.316269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.316296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.316312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.316526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.316780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.316803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.316817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.319705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.328949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.329258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.329286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.329302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.329518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.329767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.329797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.329810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.332741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.341928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.342237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.342265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.342281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.342498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.342733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.342755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.342767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.345544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.355011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.355417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.355445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.355466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.355713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.355911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.355930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.355943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.358817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.368014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.368325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.368353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.368369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.368586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.368840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.368861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.368875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.371770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.381044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.381385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.381412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.381427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.381658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.381862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.381881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.381894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.384770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.394241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.394647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.394683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.394701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.394935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.395136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.395160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.395173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.398037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.407194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.407568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.407635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.407651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.407910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.408130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.408150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.408162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.411022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.420225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.420618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.520 [2024-10-07 09:48:36.420680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.520 [2024-10-07 09:48:36.420698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.520 [2024-10-07 09:48:36.420944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.520 [2024-10-07 09:48:36.421131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.520 [2024-10-07 09:48:36.421151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.520 [2024-10-07 09:48:36.421163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.520 [2024-10-07 09:48:36.423946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.520 [2024-10-07 09:48:36.433257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.520 [2024-10-07 09:48:36.433661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.521 [2024-10-07 09:48:36.433735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.521 [2024-10-07 09:48:36.433753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.521 [2024-10-07 09:48:36.434015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.521 [2024-10-07 09:48:36.434202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.521 [2024-10-07 09:48:36.434220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.521 [2024-10-07 09:48:36.434232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.521 [2024-10-07 09:48:36.437018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.521 [2024-10-07 09:48:36.446364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.521 [2024-10-07 09:48:36.446704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.521 [2024-10-07 09:48:36.446733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.521 [2024-10-07 09:48:36.446749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.521 [2024-10-07 09:48:36.446970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.521 [2024-10-07 09:48:36.447190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.521 [2024-10-07 09:48:36.447210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.521 [2024-10-07 09:48:36.447223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.521 [2024-10-07 09:48:36.450216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.521 [2024-10-07 09:48:36.459398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.521 [2024-10-07 09:48:36.459743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.521 [2024-10-07 09:48:36.459771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.521 [2024-10-07 09:48:36.459787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.521 [2024-10-07 09:48:36.460020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.521 [2024-10-07 09:48:36.460222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.521 [2024-10-07 09:48:36.460240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.521 [2024-10-07 09:48:36.460252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.521 [2024-10-07 09:48:36.463142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.521 [2024-10-07 09:48:36.472517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.521 [2024-10-07 09:48:36.472870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.521 [2024-10-07 09:48:36.472897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.521 [2024-10-07 09:48:36.472913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.521 [2024-10-07 09:48:36.473143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.521 [2024-10-07 09:48:36.473347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.521 [2024-10-07 09:48:36.473366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.521 [2024-10-07 09:48:36.473379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.521 [2024-10-07 09:48:36.476285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.521 [2024-10-07 09:48:36.485556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.521 [2024-10-07 09:48:36.485905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.521 [2024-10-07 09:48:36.485933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.521 [2024-10-07 09:48:36.485949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.521 [2024-10-07 09:48:36.486187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.521 [2024-10-07 09:48:36.486388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.521 [2024-10-07 09:48:36.486409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.521 [2024-10-07 09:48:36.486421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.521 [2024-10-07 09:48:36.489331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.521 [2024-10-07 09:48:36.498817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.521 [2024-10-07 09:48:36.499194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.521 [2024-10-07 09:48:36.499223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.521 [2024-10-07 09:48:36.499239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.521 [2024-10-07 09:48:36.499489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.521 [2024-10-07 09:48:36.499720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.521 [2024-10-07 09:48:36.499753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.521 [2024-10-07 09:48:36.499768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.521 [2024-10-07 09:48:36.502753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.521 [2024-10-07 09:48:36.512402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.521 [2024-10-07 09:48:36.512784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.521 [2024-10-07 09:48:36.512814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:47.521 [2024-10-07 09:48:36.512832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:47.521 [2024-10-07 09:48:36.513073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:47.521 [2024-10-07 09:48:36.513299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.521 [2024-10-07 09:48:36.513319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.521 [2024-10-07 09:48:36.513331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.782 [2024-10-07 09:48:36.516351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.782 [2024-10-07 09:48:36.525544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.782 [2024-10-07 09:48:36.525886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.782 [2024-10-07 09:48:36.525916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.782 [2024-10-07 09:48:36.525933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.782 [2024-10-07 09:48:36.526166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.782 [2024-10-07 09:48:36.526367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.782 [2024-10-07 09:48:36.526387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.782 [2024-10-07 09:48:36.526405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.782 [2024-10-07 09:48:36.529410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.782 [2024-10-07 09:48:36.539039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.782 [2024-10-07 09:48:36.539388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.782 [2024-10-07 09:48:36.539416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.782 [2024-10-07 09:48:36.539432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.782 [2024-10-07 09:48:36.539677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.782 [2024-10-07 09:48:36.539886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.782 [2024-10-07 09:48:36.539905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.782 [2024-10-07 09:48:36.539917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.782 [2024-10-07 09:48:36.542796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.782 [2024-10-07 09:48:36.552275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.782 [2024-10-07 09:48:36.552681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.782 [2024-10-07 09:48:36.552726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.782 [2024-10-07 09:48:36.552742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.782 [2024-10-07 09:48:36.552981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.782 [2024-10-07 09:48:36.553185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.782 [2024-10-07 09:48:36.553204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.782 [2024-10-07 09:48:36.553217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.782 [2024-10-07 09:48:36.556120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.782 [2024-10-07 09:48:36.565518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.782 [2024-10-07 09:48:36.565964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.782 [2024-10-07 09:48:36.566007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.782 [2024-10-07 09:48:36.566023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.782 [2024-10-07 09:48:36.566254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.782 [2024-10-07 09:48:36.566467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.782 [2024-10-07 09:48:36.566486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.782 [2024-10-07 09:48:36.566498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.782 [2024-10-07 09:48:36.569407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.782 [2024-10-07 09:48:36.578751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.782 [2024-10-07 09:48:36.579114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.782 [2024-10-07 09:48:36.579191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.782 [2024-10-07 09:48:36.579208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.782 [2024-10-07 09:48:36.579435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.782 [2024-10-07 09:48:36.579639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.782 [2024-10-07 09:48:36.579684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.782 [2024-10-07 09:48:36.579698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.782 [2024-10-07 09:48:36.582567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.782 [2024-10-07 09:48:36.591998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.782 [2024-10-07 09:48:36.592343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.782 [2024-10-07 09:48:36.592371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.782 [2024-10-07 09:48:36.592386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.782 [2024-10-07 09:48:36.592620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.782 [2024-10-07 09:48:36.592824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.782 [2024-10-07 09:48:36.592845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.782 [2024-10-07 09:48:36.592857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.595807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.605640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.606005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.606035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.606052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.606278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.606493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.606513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.606526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.609790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.619194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.619567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.619595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.619612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.619836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.620080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.620100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.620112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.623355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.632757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.633153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.633182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.633199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.633440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.633681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.633703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.633718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.636956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.646423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.646765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.646816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.646833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.647076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.647324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.647345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.647358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.650570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.659799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.660201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.660229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.660245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.660473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.660707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.660730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.660745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.663745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.673143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.673547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.673575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.673591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.673828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.674068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.674088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.674100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.677117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.686472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.686790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.686819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.686836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.687076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.687279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.687298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.687311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.690275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.699752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.700108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.700136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.700152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.700388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.700591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.700610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.700622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.703534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.712897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.713259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.713287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.713308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.713542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.713790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.713811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.713824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.716709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.783 [2024-10-07 09:48:36.726082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.783 [2024-10-07 09:48:36.726425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.783 [2024-10-07 09:48:36.726454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.783 [2024-10-07 09:48:36.726470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.783 [2024-10-07 09:48:36.726717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.783 [2024-10-07 09:48:36.726931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.783 [2024-10-07 09:48:36.726952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.783 [2024-10-07 09:48:36.726965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.783 [2024-10-07 09:48:36.729847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 325481 Killed "${NVMF_APP[@]}" "$@"
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:47.784 [2024-10-07 09:48:36.739374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.784 [2024-10-07 09:48:36.739745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.784 [2024-10-07 09:48:36.739774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.784 [2024-10-07 09:48:36.739790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.784 [2024-10-07 09:48:36.740023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.784 [2024-10-07 09:48:36.740231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.784 [2024-10-07 09:48:36.740252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.784 [2024-10-07 09:48:36.740265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=326506
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 326506
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 326506 ']'
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:47.784 09:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:47.784 [2024-10-07 09:48:36.743454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.784 [2024-10-07 09:48:36.752812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.784 [2024-10-07 09:48:36.753215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.784 [2024-10-07 09:48:36.753258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.784 [2024-10-07 09:48:36.753274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.784 [2024-10-07 09:48:36.753510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.784 [2024-10-07 09:48:36.753754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.784 [2024-10-07 09:48:36.753776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.784 [2024-10-07 09:48:36.753791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.784 [2024-10-07 09:48:36.756863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:47.784 [2024-10-07 09:48:36.766249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:47.784 [2024-10-07 09:48:36.766600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.784 [2024-10-07 09:48:36.766629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:47.784 [2024-10-07 09:48:36.766646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:47.784 [2024-10-07 09:48:36.766884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:47.784 [2024-10-07 09:48:36.767115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:47.784 [2024-10-07 09:48:36.767134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:47.784 [2024-10-07 09:48:36.767146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:47.784 [2024-10-07 09:48:36.770213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:48.042 [2024-10-07 09:48:36.779638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.042 [2024-10-07 09:48:36.780117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.042 [2024-10-07 09:48:36.780147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:48.042 [2024-10-07 09:48:36.780164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:48.042 [2024-10-07 09:48:36.780419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:48.042 [2024-10-07 09:48:36.780654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:48.042 [2024-10-07 09:48:36.780701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:48.042 [2024-10-07 09:48:36.780717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:48.042 [2024-10-07 09:48:36.784160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:48.042 [2024-10-07 09:48:36.790927] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:27:48.042 [2024-10-07 09:48:36.790999] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:48.042 [2024-10-07 09:48:36.792847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.042 [2024-10-07 09:48:36.793219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.042 [2024-10-07 09:48:36.793248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:48.042 [2024-10-07 09:48:36.793264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:48.042 [2024-10-07 09:48:36.793508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:48.042 [2024-10-07 09:48:36.793761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:48.043 [2024-10-07 09:48:36.793782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:48.043 [2024-10-07 09:48:36.793795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:48.043 [2024-10-07 09:48:36.796760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:48.043 [2024-10-07 09:48:36.806389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.043 [2024-10-07 09:48:36.806715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.043 [2024-10-07 09:48:36.806744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:48.043 [2024-10-07 09:48:36.806761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:48.043 [2024-10-07 09:48:36.806990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:48.043 [2024-10-07 09:48:36.807200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:48.043 [2024-10-07 09:48:36.807220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:48.043 [2024-10-07 09:48:36.807233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:48.043 [2024-10-07 09:48:36.810228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:48.043 [2024-10-07 09:48:36.819621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.043 [2024-10-07 09:48:36.820066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.043 [2024-10-07 09:48:36.820094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:48.043 [2024-10-07 09:48:36.820110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:48.043 [2024-10-07 09:48:36.820353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:48.043 [2024-10-07 09:48:36.820545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:48.043 [2024-10-07 09:48:36.820569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:48.043 [2024-10-07 09:48:36.820583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:48.043 [2024-10-07 09:48:36.823556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:48.043 [2024-10-07 09:48:36.832995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.043 [2024-10-07 09:48:36.833361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.043 [2024-10-07 09:48:36.833389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.043 [2024-10-07 09:48:36.833405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.043 [2024-10-07 09:48:36.833639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.043 [2024-10-07 09:48:36.833886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.043 [2024-10-07 09:48:36.833908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.043 [2024-10-07 09:48:36.833922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.043 [2024-10-07 09:48:36.836957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.043 [2024-10-07 09:48:36.846277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.043 [2024-10-07 09:48:36.846597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.043 [2024-10-07 09:48:36.846626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.043 [2024-10-07 09:48:36.846642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.043 [2024-10-07 09:48:36.846907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.043 [2024-10-07 09:48:36.847119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.043 [2024-10-07 09:48:36.847139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.043 [2024-10-07 09:48:36.847152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.043 [2024-10-07 09:48:36.850139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.043 [2024-10-07 09:48:36.853228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:48.043 [2024-10-07 09:48:36.859616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.043 [2024-10-07 09:48:36.860045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.043 [2024-10-07 09:48:36.860079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.043 [2024-10-07 09:48:36.860097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.043 [2024-10-07 09:48:36.860346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.043 [2024-10-07 09:48:36.860556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.043 [2024-10-07 09:48:36.860576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.043 [2024-10-07 09:48:36.860589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.043 [2024-10-07 09:48:36.863585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.043 [2024-10-07 09:48:36.872883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.043 [2024-10-07 09:48:36.873367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.043 [2024-10-07 09:48:36.873400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.043 [2024-10-07 09:48:36.873418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.043 [2024-10-07 09:48:36.873657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.043 [2024-10-07 09:48:36.873881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.043 [2024-10-07 09:48:36.873902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.043 [2024-10-07 09:48:36.873916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.043 [2024-10-07 09:48:36.876900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.043 [2024-10-07 09:48:36.886140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.043 [2024-10-07 09:48:36.886556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.043 [2024-10-07 09:48:36.886584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.043 [2024-10-07 09:48:36.886601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.043 [2024-10-07 09:48:36.886869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.043 [2024-10-07 09:48:36.887099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.043 [2024-10-07 09:48:36.887119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.043 [2024-10-07 09:48:36.887132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.043 [2024-10-07 09:48:36.890086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.043 [2024-10-07 09:48:36.899487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.043 [2024-10-07 09:48:36.899871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.043 [2024-10-07 09:48:36.899902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.043 [2024-10-07 09:48:36.899919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.043 [2024-10-07 09:48:36.900174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.043 [2024-10-07 09:48:36.900385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.043 [2024-10-07 09:48:36.900405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.043 [2024-10-07 09:48:36.900418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.043 [2024-10-07 09:48:36.903389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.043 [2024-10-07 09:48:36.912695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.043 [2024-10-07 09:48:36.913236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.043 [2024-10-07 09:48:36.913272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.043 [2024-10-07 09:48:36.913301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.043 [2024-10-07 09:48:36.913572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.043 [2024-10-07 09:48:36.913797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.043 [2024-10-07 09:48:36.913818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.043 [2024-10-07 09:48:36.913833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.043 [2024-10-07 09:48:36.916818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.043 [2024-10-07 09:48:36.925900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.043 [2024-10-07 09:48:36.926373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.043 [2024-10-07 09:48:36.926408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.043 [2024-10-07 09:48:36.926426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.043 [2024-10-07 09:48:36.926689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:36.926890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:36.926912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:36.926927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:36.929951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.044 [2024-10-07 09:48:36.939195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.044 [2024-10-07 09:48:36.939521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.044 [2024-10-07 09:48:36.939550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.044 [2024-10-07 09:48:36.939567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.044 [2024-10-07 09:48:36.939820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:36.940068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:36.940090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:36.940104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:36.943090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.044 [2024-10-07 09:48:36.952485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.044 [2024-10-07 09:48:36.952863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.044 [2024-10-07 09:48:36.952893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.044 [2024-10-07 09:48:36.952910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.044 [2024-10-07 09:48:36.953148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:36.953357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:36.953390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:36.953404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:36.956416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.044 [2024-10-07 09:48:36.958856] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:48.044 [2024-10-07 09:48:36.958888] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.044 [2024-10-07 09:48:36.958902] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:48.044 [2024-10-07 09:48:36.958913] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:48.044 [2024-10-07 09:48:36.958922] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:48.044 [2024-10-07 09:48:36.959689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.044 [2024-10-07 09:48:36.959746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.044 [2024-10-07 09:48:36.959750] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.044 [2024-10-07 09:48:36.965883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.044 [2024-10-07 09:48:36.966378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.044 [2024-10-07 09:48:36.966412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.044 [2024-10-07 09:48:36.966432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.044 [2024-10-07 09:48:36.966674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:36.966889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:36.966911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:36.966926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:36.970071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.044 [2024-10-07 09:48:36.979322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.044 [2024-10-07 09:48:36.979869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.044 [2024-10-07 09:48:36.979909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.044 [2024-10-07 09:48:36.979928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.044 [2024-10-07 09:48:36.980180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:36.980389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:36.980410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:36.980425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:36.983518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.044 [2024-10-07 09:48:36.992809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.044 [2024-10-07 09:48:36.993318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.044 [2024-10-07 09:48:36.993358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.044 [2024-10-07 09:48:36.993389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.044 [2024-10-07 09:48:36.993628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:36.993882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:36.993905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:36.993922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:36.997064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.044 [2024-10-07 09:48:37.006399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.044 [2024-10-07 09:48:37.006912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.044 [2024-10-07 09:48:37.006953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.044 [2024-10-07 09:48:37.006973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.044 [2024-10-07 09:48:37.007209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:37.007419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:37.007441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:37.007457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:37.010598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.044 [2024-10-07 09:48:37.019863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.044 [2024-10-07 09:48:37.020353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.044 [2024-10-07 09:48:37.020392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.044 [2024-10-07 09:48:37.020412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.044 [2024-10-07 09:48:37.020662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:37.020878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:37.020900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:37.020916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:37.024006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.044 [2024-10-07 09:48:37.033455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.044 [2024-10-07 09:48:37.033974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.044 [2024-10-07 09:48:37.034013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.044 [2024-10-07 09:48:37.034035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.044 [2024-10-07 09:48:37.034306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.044 [2024-10-07 09:48:37.034522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.044 [2024-10-07 09:48:37.034553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.044 [2024-10-07 09:48:37.034570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.044 [2024-10-07 09:48:37.037937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.302 [2024-10-07 09:48:37.046881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.302 [2024-10-07 09:48:37.047319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-10-07 09:48:37.047349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.302 [2024-10-07 09:48:37.047365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.302 [2024-10-07 09:48:37.047594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.302 [2024-10-07 09:48:37.047845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.302 [2024-10-07 09:48:37.047867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.302 [2024-10-07 09:48:37.047880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.303 [2024-10-07 09:48:37.051022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.303 [2024-10-07 09:48:37.060413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.303 [2024-10-07 09:48:37.060731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.303 [2024-10-07 09:48:37.060761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.303 [2024-10-07 09:48:37.060778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.303 [2024-10-07 09:48:37.061009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.303 [2024-10-07 09:48:37.061221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.303 [2024-10-07 09:48:37.061244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.303 [2024-10-07 09:48:37.061258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.303 [2024-10-07 09:48:37.064483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:48.303 [2024-10-07 09:48:37.074056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.303 [2024-10-07 09:48:37.074424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.303 [2024-10-07 09:48:37.074453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.303 [2024-10-07 09:48:37.074470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.303 [2024-10-07 09:48:37.074695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.303 [2024-10-07 09:48:37.074913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.303 [2024-10-07 09:48:37.074943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.303 [2024-10-07 09:48:37.074973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.303 [2024-10-07 09:48:37.078169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.303 [2024-10-07 09:48:37.087615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.303 [2024-10-07 09:48:37.087973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.303 [2024-10-07 09:48:37.088002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.303 [2024-10-07 09:48:37.088018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.303 [2024-10-07 09:48:37.088246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.303 [2024-10-07 09:48:37.088467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.303 [2024-10-07 09:48:37.088487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.303 [2024-10-07 09:48:37.088500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.303 [2024-10-07 09:48:37.091788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:48.303 [2024-10-07 09:48:37.101210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.303 [2024-10-07 09:48:37.101581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.303 [2024-10-07 09:48:37.101610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.303 [2024-10-07 09:48:37.101627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.303 [2024-10-07 09:48:37.101850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.303 [2024-10-07 09:48:37.102099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.303 [2024-10-07 09:48:37.102120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.303 [2024-10-07 09:48:37.102134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.303 [2024-10-07 09:48:37.103517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.303 [2024-10-07 09:48:37.105343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.303 [2024-10-07 09:48:37.114534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.303 [2024-10-07 09:48:37.114863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.303 [2024-10-07 09:48:37.114892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.303 [2024-10-07 09:48:37.114908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.303 [2024-10-07 09:48:37.115157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.303 [2024-10-07 09:48:37.115356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.303 [2024-10-07 09:48:37.115376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.303 [2024-10-07 09:48:37.115390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.303 [2024-10-07 09:48:37.118457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.303 [2024-10-07 09:48:37.128070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.303 [2024-10-07 09:48:37.128422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.303 [2024-10-07 09:48:37.128452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420 00:27:48.303 [2024-10-07 09:48:37.128469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:48.303 [2024-10-07 09:48:37.128692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:48.303 [2024-10-07 09:48:37.128917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.303 [2024-10-07 09:48:37.128950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.303 [2024-10-07 09:48:37.128964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.303 [2024-10-07 09:48:37.132298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:48.303 [2024-10-07 09:48:37.141533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.303 [2024-10-07 09:48:37.142061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.303 [2024-10-07 09:48:37.142103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:48.303 [2024-10-07 09:48:37.142123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:48.303 [2024-10-07 09:48:37.142360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:48.303 [2024-10-07 09:48:37.142567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:48.303 [2024-10-07 09:48:37.142589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:48.303 [2024-10-07 09:48:37.142605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:48.303 [2024-10-07 09:48:37.145842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:48.303 Malloc0
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:48.303 [2024-10-07 09:48:37.155286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.303 [2024-10-07 09:48:37.155723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.303 [2024-10-07 09:48:37.155757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:48.303 [2024-10-07 09:48:37.155789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:48.303 [2024-10-07 09:48:37.156028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:48.303 [2024-10-07 09:48:37.156250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:48.303 [2024-10-07 09:48:37.156272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:48.303 [2024-10-07 09:48:37.156287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:48.303 [2024-10-07 09:48:37.159456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:48.303 [2024-10-07 09:48:37.168859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:48.303 [2024-10-07 09:48:37.169202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.303 [2024-10-07 09:48:37.169232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143fc90 with addr=10.0.0.2, port=4420
00:27:48.303 [2024-10-07 09:48:37.169249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143fc90 is same with the state(6) to be set
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:48.303 [2024-10-07 09:48:37.169464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143fc90 (9): Bad file descriptor
00:27:48.303 [2024-10-07 09:48:37.169692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:48.303 [2024-10-07 09:48:37.169715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:48.303 [2024-10-07 09:48:37.169729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:48.303 [2024-10-07 09:48:37.172872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:48.303 [2024-10-07 09:48:37.172970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:48.303 09:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 325757
00:27:48.303 [2024-10-07 09:48:37.182510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.561 3459.83 IOPS, 13.51 MiB/s [2024-10-07 09:48:37.342616] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:57.519 4141.57 IOPS, 16.18 MiB/s 4686.25 IOPS, 18.31 MiB/s 5098.67 IOPS, 19.92 MiB/s 5444.50 IOPS, 21.27 MiB/s 5734.45 IOPS, 22.40 MiB/s 5972.92 IOPS, 23.33 MiB/s 6166.77 IOPS, 24.09 MiB/s 6343.21 IOPS, 24.78 MiB/s 6482.87 IOPS, 25.32 MiB/s
00:27:57.519 Latency(us)
00:27:57.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:57.519 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:57.519 Verification LBA range: start 0x0 length 0x4000
00:27:57.519 Nvme1n1 : 15.01 6479.13 25.31 10551.87 0.00 7492.34 567.37 20486.07
00:27:57.519 ===================================================================================================================
00:27:57.519 Total : 6479.13 25.31 10551.87 0.00 7492.34 567.37 20486.07
00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf --
common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:57.782 rmmod nvme_tcp 00:27:57.782 rmmod nvme_fabrics 00:27:57.782 rmmod nvme_keyring 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 326506 ']' 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 326506 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 326506 ']' 00:27:57.782 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 326506 00:27:57.783 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:27:57.783 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:57.783 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 326506 00:27:57.783 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:57.783 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:57.783 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 326506' 00:27:57.783 killing process with pid 326506 00:27:57.783 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 326506 00:27:57.783 09:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 326506 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.042 09:48:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.042 09:48:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:00.576 00:28:00.576 real 0m22.893s 00:28:00.576 user 1m1.472s 00:28:00.576 sys 0m4.224s 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:00.576 ************************************ 00:28:00.576 END TEST nvmf_bdevperf 00:28:00.576 ************************************ 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.576 ************************************ 00:28:00.576 START TEST nvmf_target_disconnect 00:28:00.576 ************************************ 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:00.576 * Looking for test storage... 
00:28:00.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:00.576 09:48:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:00.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.576 
--rc genhtml_branch_coverage=1 00:28:00.576 --rc genhtml_function_coverage=1 00:28:00.576 --rc genhtml_legend=1 00:28:00.576 --rc geninfo_all_blocks=1 00:28:00.576 --rc geninfo_unexecuted_blocks=1 00:28:00.576 00:28:00.576 ' 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:00.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.576 --rc genhtml_branch_coverage=1 00:28:00.576 --rc genhtml_function_coverage=1 00:28:00.576 --rc genhtml_legend=1 00:28:00.576 --rc geninfo_all_blocks=1 00:28:00.576 --rc geninfo_unexecuted_blocks=1 00:28:00.576 00:28:00.576 ' 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:00.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.576 --rc genhtml_branch_coverage=1 00:28:00.576 --rc genhtml_function_coverage=1 00:28:00.576 --rc genhtml_legend=1 00:28:00.576 --rc geninfo_all_blocks=1 00:28:00.576 --rc geninfo_unexecuted_blocks=1 00:28:00.576 00:28:00.576 ' 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:00.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.576 --rc genhtml_branch_coverage=1 00:28:00.576 --rc genhtml_function_coverage=1 00:28:00.576 --rc genhtml_legend=1 00:28:00.576 --rc geninfo_all_blocks=1 00:28:00.576 --rc geninfo_unexecuted_blocks=1 00:28:00.576 00:28:00.576 ' 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.576 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.577 09:48:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.577 09:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.479 
09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)'
00:28:02.479 Found 0000:09:00.0 (0x8086 - 0x1592)
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)'
00:28:02.479 Found 0000:09:00.1 (0x8086 - 0x1592)
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:02.479 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]]
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:28:02.480 Found net devices under 0000:09:00.0: cvl_0_0
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]]
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:28:02.480 Found net devices under 0000:09:00.1: cvl_0_1
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:02.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:02.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms
00:28:02.480
00:28:02.480 --- 10.0.0.2 ping statistics ---
00:28:02.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:02.480 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:02.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:02.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms
00:28:02.480
00:28:02.480 --- 10.0.0.1 ping statistics ---
00:28:02.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:02.480 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:02.480 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:28:02.740 ************************************
00:28:02.740 START TEST nvmf_target_disconnect_tc1
00:28:02.740 ************************************
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:02.740 [2024-10-07 09:48:51.560293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.740 [2024-10-07 09:48:51.560367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1994220 with addr=10.0.0.2, port=4420
00:28:02.740 [2024-10-07 09:48:51.560398] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:28:02.740 [2024-10-07 09:48:51.560439] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:28:02.740 [2024-10-07 09:48:51.560454] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:28:02.740 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:28:02.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:28:02.740 Initializing NVMe Controllers
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:02.740
00:28:02.740 real 0m0.087s
00:28:02.740 user 0m0.037s
00:28:02.740 sys 0m0.049s
00:28:02.740 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:28:02.741 ************************************
00:28:02.741 END TEST nvmf_target_disconnect_tc1
00:28:02.741 ************************************
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:28:02.741 ************************************
00:28:02.741 START TEST nvmf_target_disconnect_tc2
00:28:02.741 ************************************
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=329527
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 329527
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 329527 ']'
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:02.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:02.741 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.741 [2024-10-07 09:48:51.676798] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... [2024-10-07 09:48:51.676887] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:02.999 [2024-10-07 09:48:51.740806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:02.999 [2024-10-07 09:48:51.852346] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:02.999 [2024-10-07 09:48:51.852418] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:02.999 [2024-10-07 09:48:51.852431] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:02.999 [2024-10-07 09:48:51.852442] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:02.999 [2024-10-07 09:48:51.852452] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:02.999 [2024-10-07 09:48:51.854111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:28:02.999 [2024-10-07 09:48:51.854172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:28:02.999 [2024-10-07 09:48:51.854236] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:28:02.999 [2024-10-07 09:48:51.854240] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:28:02.999 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:02.999 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:28:02.999 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:02.999 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:02.999 09:48:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:03.258 Malloc0
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:03.258 [2024-10-07 09:48:52.042475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:03.258 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:03.259 [2024-10-07 09:48:52.070784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=329549
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:28:03.259 09:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:05.172 09:48:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 329527
00:28:05.172 09:48:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Write completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.172 Read completed with error (sct=0, sc=8)
00:28:05.172 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 [2024-10-07 09:48:54.097035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 [2024-10-07 09:48:54.097426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 [2024-10-07 09:48:54.097811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Read completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 Write completed with error (sct=0, sc=8)
00:28:05.173 starting I/O failed
00:28:05.173 [2024-10-07 09:48:54.098150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or
address) on qpair id 1 00:28:05.173 [2024-10-07 09:48:54.098396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.173 [2024-10-07 09:48:54.098433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.173 qpair failed and we were unable to recover it. 00:28:05.173 [2024-10-07 09:48:54.098532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.173 [2024-10-07 09:48:54.098560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.173 qpair failed and we were unable to recover it. 00:28:05.173 [2024-10-07 09:48:54.098652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.173 [2024-10-07 09:48:54.098690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.173 qpair failed and we were unable to recover it. 00:28:05.173 [2024-10-07 09:48:54.098821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.098847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.098937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.098965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 
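The `(sct=0, sc=8)` pairs in the completion records above are the NVMe Status Code Type and Status Code fields reported for each failed I/O. As an illustrative aside (not part of this log or of SPDK's code), the two fields can be unpacked from a raw CQE Status Field once the phase tag has been shifted out; the bit positions below follow the NVMe base specification (SC in bits 7:0, SCT in bits 10:8):

```python
def decode_status(status_field: int) -> tuple[int, int]:
    """Split an NVMe CQE Status Field (phase tag already shifted out)
    into (status_code_type, status_code)."""
    sc = status_field & 0xFF         # Status Code, bits 7:0
    sct = (status_field >> 8) & 0x7  # Status Code Type, bits 10:8
    return sct, sc

# sct=0 (generic command status), sc=8 -- as seen in the records above
print(decode_status((0 << 8) | 8))
```

SCT 0 is the generic command status set, so these completions failed with a generic-status code rather than a media or path error.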
00:28:05.174 [2024-10-07 09:48:54.099063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.099090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.099173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.099199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.099322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.099348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.099463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.099489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.099607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.099633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 
00:28:05.174 [2024-10-07 09:48:54.099753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.099799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.099906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.099945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.100067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.100093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.100220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.100245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.100366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.100393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 
00:28:05.174 [2024-10-07 09:48:54.100483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.100509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.100594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.100621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.100729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.100755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.100876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.100903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.101046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.101074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 
00:28:05.174 [2024-10-07 09:48:54.101209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.101235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.101327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.101352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.101458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.101483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.101614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.101653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.101777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.101807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 
00:28:05.174 [2024-10-07 09:48:54.101893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.101933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.102024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.102052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.102188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.102236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.102358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.102426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.102587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.102617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 
00:28:05.174 [2024-10-07 09:48:54.102724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.102751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.102865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.102894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.103015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.103044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.103136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.103162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.103275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.103301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 
00:28:05.174 [2024-10-07 09:48:54.103425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.174 [2024-10-07 09:48:54.103453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.174 qpair failed and we were unable to recover it. 00:28:05.174 [2024-10-07 09:48:54.103596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.103622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.103739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.103779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.103874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.103902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.104010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.104047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 
00:28:05.175 [2024-10-07 09:48:54.104144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.104171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.104285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.104312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.104395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.104422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.104545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.104571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.104696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.104734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 
00:28:05.175 [2024-10-07 09:48:54.104813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.104838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.104947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.104982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.105092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.105119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.105207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.105233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.105338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.105364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 
00:28:05.175 [2024-10-07 09:48:54.105478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.105504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.105599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.105637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.105739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.105766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.105860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.105889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.105986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.106013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 
00:28:05.175 [2024-10-07 09:48:54.106120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.106148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.106259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.106285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.106374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.106402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.106521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.106548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.106658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.106693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 
00:28:05.175 [2024-10-07 09:48:54.106778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.106804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.106886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.106913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.107033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.107061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.107181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.107207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.107320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.107346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 
00:28:05.175 [2024-10-07 09:48:54.107459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.107485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.107604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.107639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.107746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.107774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.107860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.107887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.108008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.108036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 
00:28:05.175 [2024-10-07 09:48:54.108140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.108166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.108273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.108302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.108416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.175 [2024-10-07 09:48:54.108443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.175 qpair failed and we were unable to recover it. 00:28:05.175 [2024-10-07 09:48:54.108531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.176 [2024-10-07 09:48:54.108559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.176 qpair failed and we were unable to recover it. 00:28:05.176 [2024-10-07 09:48:54.108637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.176 [2024-10-07 09:48:54.108671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.176 qpair failed and we were unable to recover it. 
00:28:05.176 [2024-10-07 09:48:54.108766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.176 [2024-10-07 09:48:54.108794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.176 qpair failed and we were unable to recover it. 00:28:05.176 [2024-10-07 09:48:54.108878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.176 [2024-10-07 09:48:54.108904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.176 qpair failed and we were unable to recover it. 00:28:05.176 [2024-10-07 09:48:54.109040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.176 [2024-10-07 09:48:54.109066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.176 qpair failed and we were unable to recover it. 00:28:05.176 [2024-10-07 09:48:54.109157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.176 [2024-10-07 09:48:54.109184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.176 qpair failed and we were unable to recover it. 00:28:05.176 [2024-10-07 09:48:54.109277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.176 [2024-10-07 09:48:54.109304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.176 qpair failed and we were unable to recover it. 
00:28:05.176 [2024-10-07 09:48:54.109384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.109411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.109512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.109539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.109639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.109671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.109759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.109784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.109892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.109918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.110025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.110051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.110166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.110192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.110307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.110333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.110438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.110463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.110574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.110599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.110721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.110747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.110826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.110852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.110954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.110992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.111114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.111143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.111257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.111283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.111364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.111390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.111507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.111534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.111627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.111675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.111776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.111804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.111896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.111921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.112068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.112093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.112176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.112201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.112323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.112348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.112466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.112495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.112579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.112605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.113702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.113760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.113917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.113950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.114064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.114090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.114217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.114243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.176 [2024-10-07 09:48:54.114333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.176 [2024-10-07 09:48:54.114361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.176 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.114482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.114507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.114626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.114652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.114776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.114802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.114889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.114915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.115032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.115058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.115198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.115226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.115336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.115362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.115502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.115530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.115637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.115663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.115780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.115807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.115923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.115950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.116092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.116118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.116213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.116241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.116362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.116389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.116525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.116551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.116643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.116680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.116797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.116823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.116936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.116962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.117046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.117073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.117184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.117210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.117306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.117332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.117424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.117452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.117605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.117644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.117777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.117805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.117936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.117974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.118118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.118146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.118287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.118316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.118429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.118455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.118571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.118597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.118715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.118743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.118827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.118853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.177 qpair failed and we were unable to recover it.
00:28:05.177 [2024-10-07 09:48:54.118997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.177 [2024-10-07 09:48:54.119022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.119134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.119159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.119268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.119294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.119382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.119408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.119527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.119555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.119646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.119683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.119797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.119823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.119907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.119933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.120019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.120045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.120133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.120161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.120304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.120332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.120434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.120472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.120593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.120619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.120717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.120743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.120825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.120850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.120983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.121008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.121092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.121117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.121228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.121257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.121365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.121390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.121510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.121538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.121647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.121681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.121828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.121854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.121995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.122021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.122103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.122131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.122219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.122249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.122367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.122395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.122521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.122560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.122681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.122709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.122827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.122853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.122966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.122992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.123079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.123105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.123216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.123242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.123357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.123388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.123468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.123496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.123640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.123687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.123793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.123819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.123906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.123933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.124040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.124065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.124179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.178 [2024-10-07 09:48:54.124207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.178 qpair failed and we were unable to recover it.
00:28:05.178 [2024-10-07 09:48:54.124348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.124373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.124457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.124482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.124560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.124586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.124696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.124723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.124813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.124839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.124953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.124979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.125065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.125091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.125206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.125231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.125306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.125331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.125416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.125442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.125523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.125548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.125663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.125697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.125815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.125843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.125956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.125982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.126121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.179 [2024-10-07 09:48:54.126146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.179 qpair failed and we were unable to recover it.
00:28:05.179 [2024-10-07 09:48:54.126292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.126318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.126428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.126454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.126542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.126569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.126706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.126732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.126842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.126867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 
00:28:05.179 [2024-10-07 09:48:54.127011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.127038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.127151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.127176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.127259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.127285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.127420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.127446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.127573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.127612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 
00:28:05.179 [2024-10-07 09:48:54.127755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.127794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.127924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.127952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.128044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.128070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.128228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.128277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.128419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.128445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 
00:28:05.179 [2024-10-07 09:48:54.128536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.128563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.128678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.128705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.128822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.128850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.128939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.128964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.129043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.129068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 
00:28:05.179 [2024-10-07 09:48:54.129176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.129202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.129311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-10-07 09:48:54.129337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-10-07 09:48:54.129451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.129476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.129587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.129613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.129699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.129726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-10-07 09:48:54.129834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.129860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.129973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.129998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.130121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.130159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.130307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.130333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.130414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.130439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-10-07 09:48:54.130518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.130544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.130661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.130707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.130811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.130839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.130958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.130983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.131125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.131151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-10-07 09:48:54.131231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.131256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.131331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.131356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.131501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.131528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.131661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.131706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.131854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.131881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-10-07 09:48:54.131971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.131997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.132077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.132102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.132206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.132231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.132320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.132348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.132463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.132491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-10-07 09:48:54.132572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.132603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.132715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.132741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.132852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.132877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.133012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.133038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.133151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.133176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-10-07 09:48:54.133256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.133284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.133402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.133427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.133538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.133565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.133682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.133709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.133822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.133848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-10-07 09:48:54.133952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.133977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.134115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-10-07 09:48:54.134142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-10-07 09:48:54.134224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.134250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.134392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.134419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.134541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.134568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 
00:28:05.181 [2024-10-07 09:48:54.134682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.134708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.134796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.134822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.134957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.134982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.135120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.135146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.135259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.135285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 
00:28:05.181 [2024-10-07 09:48:54.135371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.135399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.135512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.135537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.135625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.135652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.135744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.135769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.135865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.135891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 
00:28:05.181 [2024-10-07 09:48:54.136004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.136029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.136113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.136139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.136267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.136312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.136411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.136438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-10-07 09:48:54.136580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-10-07 09:48:54.136605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 
00:28:05.181 [2024-10-07 09:48:54.136714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.136740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.136865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.136890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.136997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.137023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.137102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.137127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.137242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.137272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.137418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.137445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.137555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.137594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.137710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.137738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.137821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.137847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.137953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.137979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.138059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.138085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.138180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.138206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.138282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.138307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.138424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.138452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.138564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.138589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.138704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.138730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.138838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.138863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.138976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.139001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.139150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.139175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.139254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.181 [2024-10-07 09:48:54.139282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.181 qpair failed and we were unable to recover it.
00:28:05.181 [2024-10-07 09:48:54.139379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.139418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.139540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.139567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.139653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.139685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.139811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.139836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.139956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.139985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.140135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.140183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.140269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.140295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.140379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.140405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.140509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.140535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.140688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.140716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.140860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.140887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.140967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.140993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.141129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.141155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.141240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.141265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.141425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.141475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.141589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.141614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.141705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.141732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.141820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.141854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.141972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.141997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.142111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.142137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.142250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.142276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.142412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.142440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.142555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.142582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.142735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.142774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.142868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.142894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.143009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.143034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.143147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.143173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.143253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.143279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.143396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.143422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.143575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.143614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.143739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.143767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.182 [2024-10-07 09:48:54.143849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.182 [2024-10-07 09:48:54.143874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.182 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.143951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.143976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.144114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.144140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.144227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.144253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.144339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.144364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.144450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.144475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.144561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.144587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.144698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.144735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.144811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.144836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.144922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.144948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.145062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.145086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.145240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.145278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.145366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.145395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.145483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.145515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.145606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.145633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.145759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.145786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.145908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.145936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.146046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.146071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.146208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.146234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.146323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.146349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.146459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.146488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.146617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.146657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.146756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.146783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.146903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.146931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.147046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.147074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.147199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.147227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.147363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.147391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.147475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.147502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.147625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.147675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.147799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.147829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.147950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.147980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.148068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.148094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.148205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.148230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.148341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.148369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.148484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.148510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.148626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.148653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.148753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.148779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.148861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.148887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.148967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.183 [2024-10-07 09:48:54.148992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.183 qpair failed and we were unable to recover it.
00:28:05.183 [2024-10-07 09:48:54.149108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.149133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.149246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.149275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.149388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.149416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.149498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.149525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.149605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.149631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.149762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.149792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.149903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.149930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.150044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.150072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.150151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.150177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.150293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.150319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.150430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.150456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.150567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.150594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.150706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.150733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.150848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.150877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.150977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.151009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.151104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.151130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.151244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.151272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.151388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.151415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.151538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.151579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.151708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.151738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.151827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.151853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.151963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.151989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.152100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.152128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.152267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.152294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.152407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.184 [2024-10-07 09:48:54.152442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.184 qpair failed and we were unable to recover it.
00:28:05.184 [2024-10-07 09:48:54.152586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.152614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.152736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.152762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.152874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.152901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.153019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.153048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.153189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.153216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 
00:28:05.184 [2024-10-07 09:48:54.153323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.153349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.153439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.153468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.153612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.153640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.153734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.153762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.153837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.153863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 
00:28:05.184 [2024-10-07 09:48:54.153946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.153972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.154068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.154094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.154230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.184 [2024-10-07 09:48:54.154255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.184 qpair failed and we were unable to recover it. 00:28:05.184 [2024-10-07 09:48:54.154405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.154446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.154539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.154566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 
00:28:05.185 [2024-10-07 09:48:54.154682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.154710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.154805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.154832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.154972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.155000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.155110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.155136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.155225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.155250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 
00:28:05.185 [2024-10-07 09:48:54.155360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.155388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.155467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.155495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.155585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.155611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.155745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.155787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.155915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.155944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 
00:28:05.185 [2024-10-07 09:48:54.156086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.156114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.156224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.156252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.156340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.156367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.156525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.156566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.156686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.156722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 
00:28:05.185 [2024-10-07 09:48:54.156841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.156870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.156957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.156984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.157099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.157127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.157233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.157259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.157350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.157377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 
00:28:05.185 [2024-10-07 09:48:54.157472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.157510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.157675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.157706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.157849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.157877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.157999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.158027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.158118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.158145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 
00:28:05.185 [2024-10-07 09:48:54.158220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.158247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.158391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.158419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.158530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.158558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.158678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.158705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.158787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.158814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 
00:28:05.185 [2024-10-07 09:48:54.158926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.158953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.159033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.159059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.159170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.159199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.159307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.159332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.159417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.159443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 
00:28:05.185 [2024-10-07 09:48:54.159586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.159614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.185 [2024-10-07 09:48:54.159721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.185 [2024-10-07 09:48:54.159747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.185 qpair failed and we were unable to recover it. 00:28:05.186 [2024-10-07 09:48:54.159855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.186 [2024-10-07 09:48:54.159881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.186 qpair failed and we were unable to recover it. 00:28:05.186 [2024-10-07 09:48:54.159984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.186 [2024-10-07 09:48:54.160012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.186 qpair failed and we were unable to recover it. 00:28:05.186 [2024-10-07 09:48:54.160094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.186 [2024-10-07 09:48:54.160120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.186 qpair failed and we were unable to recover it. 
00:28:05.186 [2024-10-07 09:48:54.160209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.186 [2024-10-07 09:48:54.160235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.186 qpair failed and we were unable to recover it. 00:28:05.186 [2024-10-07 09:48:54.160343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.186 [2024-10-07 09:48:54.160375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.186 qpair failed and we were unable to recover it. 00:28:05.186 [2024-10-07 09:48:54.160475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.186 [2024-10-07 09:48:54.160514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.186 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.160643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.160681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.160803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.160832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 
00:28:05.472 [2024-10-07 09:48:54.160947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.160975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.161087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.161114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.161226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.161254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.161398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.161426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.161564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.161593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 
00:28:05.472 [2024-10-07 09:48:54.161774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.161805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.161926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.161955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.162060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.162086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.162204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.162233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.162342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.162369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 
00:28:05.472 [2024-10-07 09:48:54.162490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.162517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.472 [2024-10-07 09:48:54.162627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.472 [2024-10-07 09:48:54.162653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.472 qpair failed and we were unable to recover it. 00:28:05.473 [2024-10-07 09:48:54.162802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.473 [2024-10-07 09:48:54.162829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.473 qpair failed and we were unable to recover it. 00:28:05.473 [2024-10-07 09:48:54.162939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.473 [2024-10-07 09:48:54.162964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.473 qpair failed and we were unable to recover it. 00:28:05.473 [2024-10-07 09:48:54.163102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.473 [2024-10-07 09:48:54.163129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.473 qpair failed and we were unable to recover it. 
00:28:05.473 [2024-10-07 09:48:54.163267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.473 [2024-10-07 09:48:54.163296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.473 qpair failed and we were unable to recover it. 00:28:05.473 [2024-10-07 09:48:54.163382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.473 [2024-10-07 09:48:54.163409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.473 qpair failed and we were unable to recover it. 00:28:05.473 [2024-10-07 09:48:54.163497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.473 [2024-10-07 09:48:54.163525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.473 qpair failed and we were unable to recover it. 00:28:05.473 [2024-10-07 09:48:54.163640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.473 [2024-10-07 09:48:54.163675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.473 qpair failed and we were unable to recover it. 00:28:05.473 [2024-10-07 09:48:54.163788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.473 [2024-10-07 09:48:54.163814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.473 qpair failed and we were unable to recover it. 
00:28:05.476 [... identical posix.c:1055 connect() failed (errno = 111) / nvme_tcp.c:2399 sock connection error / "qpair failed and we were unable to recover it" records repeat through 09:48:54.179, cycling over tqpair=0x7fe7a8000b90, 0x7fe7ac000b90, 0x7fe7b4000b90, and 0x1fab230, all with addr=10.0.0.2, port=4420 ...]
00:28:05.476 [2024-10-07 09:48:54.179165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.179192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.179277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.179304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.179450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.179476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.179560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.179586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.179675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.179701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 
00:28:05.476 [2024-10-07 09:48:54.179806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.179831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.179913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.179939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.180044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.180070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.180175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.180200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.180311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.180337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 
00:28:05.476 [2024-10-07 09:48:54.180455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.180480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.180589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.180614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.180708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.180735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.180830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.180859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.180938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.180963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 
00:28:05.476 [2024-10-07 09:48:54.181074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.181099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.181207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.181231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.181334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.181358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.181471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.181497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.181632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.181658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 
00:28:05.476 [2024-10-07 09:48:54.181784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.181810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.181924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.181949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.182085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.182109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.182194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.182220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 00:28:05.476 [2024-10-07 09:48:54.182331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.476 [2024-10-07 09:48:54.182356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.476 qpair failed and we were unable to recover it. 
00:28:05.476 [2024-10-07 09:48:54.182436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.182461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.182543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.182567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.182687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.182713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.182830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.182855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.182942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.182967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 
00:28:05.477 [2024-10-07 09:48:54.183053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.183079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.183153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.183177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.183282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.183306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.183412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.183437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.183554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.183578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 
00:28:05.477 [2024-10-07 09:48:54.183714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.183740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.183824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.183850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.183930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.183954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.184040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.184064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.184172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.184195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 
00:28:05.477 [2024-10-07 09:48:54.184314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.184339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.184478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.184503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.184616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.184641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.184760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.184786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.184874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.184899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 
00:28:05.477 [2024-10-07 09:48:54.185042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.185067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.185269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.185300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.185444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.185470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.185582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.185607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.185763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.185789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 
00:28:05.477 [2024-10-07 09:48:54.185906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.185932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.186050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.186076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.186151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.186176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.186288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.186319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.186411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.186436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 
00:28:05.477 [2024-10-07 09:48:54.186549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.186573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.477 [2024-10-07 09:48:54.186699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.477 [2024-10-07 09:48:54.186725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.477 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.186833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.186858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.186940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.186964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.187072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.187097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 
00:28:05.478 [2024-10-07 09:48:54.187208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.187233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.187344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.187368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.187483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.187509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.187621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.187645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.187754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.187788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 
00:28:05.478 [2024-10-07 09:48:54.187925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.187963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.188090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.188117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.188207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.188233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.188316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.188342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.188423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.188448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 
00:28:05.478 [2024-10-07 09:48:54.188538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.188563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.188652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.188684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.188799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.188824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.188907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.188931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.189072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.189097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 
00:28:05.478 [2024-10-07 09:48:54.189203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.189228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.189311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.189336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.189437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.189475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.189575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.189604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.189691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.189722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 
00:28:05.478 [2024-10-07 09:48:54.189817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.189847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.189938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.189964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.190047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.190073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.190191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.190222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.190309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.190336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 
00:28:05.478 [2024-10-07 09:48:54.190453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.190478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.190903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.190932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.191079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.191105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.191213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.191238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.191329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.191355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 
00:28:05.478 [2024-10-07 09:48:54.191441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.191468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.191607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.191632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.191780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.191806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.191892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.191918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 00:28:05.478 [2024-10-07 09:48:54.192014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.478 [2024-10-07 09:48:54.192039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.478 qpair failed and we were unable to recover it. 
00:28:05.478 [2024-10-07 09:48:54.192114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.192139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.192220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.192245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.192359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.192384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.192501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.192526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.192637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.192662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 
00:28:05.479 [2024-10-07 09:48:54.192759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.192784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.192896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.192921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.193030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.193055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.193169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.193195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.193290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.193315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 
00:28:05.479 [2024-10-07 09:48:54.193402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.193428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.193510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.193535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.193691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.193723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.193812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.193837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.193925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.193950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 
00:28:05.479 [2024-10-07 09:48:54.194043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.194073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.194201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.194225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.194336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.194361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.194485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.194510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.194596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.194622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 
00:28:05.479 [2024-10-07 09:48:54.194724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.194751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.194871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.194899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.195015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.195041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.195147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.195173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.195284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.195311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 
00:28:05.479 [2024-10-07 09:48:54.195449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.195484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.195580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.195610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.195730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.195758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.195884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.195912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.195996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 
00:28:05.479 [2024-10-07 09:48:54.196114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.196253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.196365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.196476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.196613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 
00:28:05.479 [2024-10-07 09:48:54.196744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.196865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.196966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.479 [2024-10-07 09:48:54.196991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.479 qpair failed and we were unable to recover it. 00:28:05.479 [2024-10-07 09:48:54.197073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.197098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.197297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.197349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 
00:28:05.480 [2024-10-07 09:48:54.197436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.197461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.197600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.197627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.197750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.197778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.197873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.197899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.198009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.198036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 
00:28:05.480 [2024-10-07 09:48:54.198173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.198199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.198314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.198340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.198452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.198478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.198568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.198596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.198715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.198742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 
00:28:05.480 [2024-10-07 09:48:54.198829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.198854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.198936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.198963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.199072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.199102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.199256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.199286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.199375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.199400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 
00:28:05.480 [2024-10-07 09:48:54.199488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.199515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.199627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.199654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.199770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.199800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.199917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.199943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.200032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.200061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 
00:28:05.480 [2024-10-07 09:48:54.200232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.200290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.200383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.200410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.200515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.200541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.200637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.200662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.200795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.200821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 
00:28:05.480 [2024-10-07 09:48:54.200906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.200931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.201050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.201077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.201165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.201190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.201330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.201358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.201445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.201470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 
00:28:05.480 [2024-10-07 09:48:54.201581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.201608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.201724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.201755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.201865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.201892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.201982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.202008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 00:28:05.480 [2024-10-07 09:48:54.202095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.480 [2024-10-07 09:48:54.202121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.480 qpair failed and we were unable to recover it. 
00:28:05.480 [2024-10-07 09:48:54.202201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.480 [2024-10-07 09:48:54.202227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.480 qpair failed and we were unable to recover it.
00:28:05.480 [2024-10-07 09:48:54.202328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.480 [2024-10-07 09:48:54.202354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.480 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.202471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.202497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.202635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.202661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.202781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.202821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.202978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.203007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.203122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.203150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.203243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.203270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.203390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.203419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.203506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.203538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.203649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.203684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.203773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.203797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.203914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.203941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.204054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.204080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.204216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.204243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.204356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.204383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.204471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.204499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.204612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.204643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.204731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.204756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.204842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.204866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.204955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.204982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.205062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.205086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.205175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.205204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.205292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.205318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.205455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.205482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.205563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.205588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.205681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.205707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.205794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.205819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.205904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.205933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.206020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.206045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.206137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.206177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.206270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.206299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.206394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.206420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.206499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.206524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.206607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.206632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.206760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.206787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.481 qpair failed and we were unable to recover it.
00:28:05.481 [2024-10-07 09:48:54.206868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.481 [2024-10-07 09:48:54.206893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.206975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.207000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.207113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.207140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.207256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.207286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.207413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.207444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.207544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.207585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.207686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.207717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.207865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.207893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.207981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.208106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.208220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.208356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.208488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.208598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.208723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.208840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.208947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.208975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.209116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.209143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.209263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.209289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.209376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.209401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.209511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.209539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.209661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.209698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.209790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.209817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.209900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.209927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.210063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.210200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.210312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.210425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.210534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.210642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.210769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.210900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.210987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.211012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.211089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.211121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.211247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.211278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.211366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.211400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.211512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.211539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-10-07 09:48:54.211643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.482 [2024-10-07 09:48:54.211676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.211756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.211781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.211921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.211948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.212074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.212129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.212219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.212246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.212330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.212358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.212506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.212534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.212622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.212647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.212763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.212790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.212872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.212897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.213009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.213036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.213136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.213199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.213340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.213366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.213444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.213469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.213546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.213571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.213730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.213758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.213875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.213902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.214016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.214043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.214161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.214187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.214276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.214305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.214388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.214415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.214523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.214564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.214663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.214706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.214794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.214820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.214908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.214934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.215023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.215054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.215148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.215174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.215293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.215319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.215400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.215426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.215543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.215573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.215674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.215714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.215836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.215865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.215954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.215981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.216096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.216123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.216246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.216285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.216387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.216415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.216504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.216535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.216622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.216649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.216740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.483 [2024-10-07 09:48:54.216766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.483 qpair failed and we were unable to recover it.
00:28:05.483 [2024-10-07 09:48:54.216861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.484 [2024-10-07 09:48:54.216890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.484 qpair failed and we were unable to recover it.
00:28:05.484 [2024-10-07 09:48:54.216978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.484 [2024-10-07 09:48:54.217004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.484 qpair failed and we were unable to recover it.
00:28:05.484 [2024-10-07 09:48:54.217113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.484 [2024-10-07 09:48:54.217141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.484 [2024-10-07 09:48:54.217259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.217303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.217477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.217531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.217624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.217656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.217785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.217818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.217942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.217971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 
00:28:05.484 [2024-10-07 09:48:54.218168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.218222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.218339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.218404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.218488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.218514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.218601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.218626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.218719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.218747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 
00:28:05.484 [2024-10-07 09:48:54.218851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.218878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.218991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.219018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.219101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.219126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.219207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.219239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.219355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.219385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 
00:28:05.484 [2024-10-07 09:48:54.219478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.219506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.219644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.219695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.219795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.219823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.219907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.219932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.220046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.220073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 
00:28:05.484 [2024-10-07 09:48:54.220163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.220188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.220272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.220300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.220413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.220439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.220529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.220560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.220649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.220685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 
00:28:05.484 [2024-10-07 09:48:54.220764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.220789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.220870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.220895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.220979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.221004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.221117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.221144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.221240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.221270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 
00:28:05.484 [2024-10-07 09:48:54.221357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.221390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.221477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.221507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.221595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.221621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.221714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.221741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 00:28:05.484 [2024-10-07 09:48:54.221855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.221883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.484 qpair failed and we were unable to recover it. 
00:28:05.484 [2024-10-07 09:48:54.222013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.484 [2024-10-07 09:48:54.222041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.222211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.222265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.222427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.222483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.222648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.222688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.222784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.222809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 
00:28:05.485 [2024-10-07 09:48:54.222896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.222922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.223029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.223055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.223143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.223174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.223262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.223289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.223431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.223458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 
00:28:05.485 [2024-10-07 09:48:54.223541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.223567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.223681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.223708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.223818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.223845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.223935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.223960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.224071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.224098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 
00:28:05.485 [2024-10-07 09:48:54.224179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.224210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.224296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.224336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.224462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.224503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.224596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.224625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.224765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.224794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 
00:28:05.485 [2024-10-07 09:48:54.224914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.224941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.225031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.225059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.225171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.225200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.225292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.225324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.225411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.225437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 
00:28:05.485 [2024-10-07 09:48:54.225552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.225579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.225700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.225727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.225810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.225835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.225977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.226098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 
00:28:05.485 [2024-10-07 09:48:54.226214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.226329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.226457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.226592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.226708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 
00:28:05.485 [2024-10-07 09:48:54.226821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.226933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.226960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.227034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.227059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.227194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.485 [2024-10-07 09:48:54.227221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.485 qpair failed and we were unable to recover it. 00:28:05.485 [2024-10-07 09:48:54.227316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.227356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 
00:28:05.486 [2024-10-07 09:48:54.227450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.227479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.227570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.227599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.227689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.227722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.227808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.227835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.227977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 
00:28:05.486 [2024-10-07 09:48:54.228089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.228220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.228333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.228445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.228561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 
00:28:05.486 [2024-10-07 09:48:54.228698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.228833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.228948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.228975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.229088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.229116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.229231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.229260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 
00:28:05.486 [2024-10-07 09:48:54.229406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.229433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.229546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.229573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.229648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.229679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.229798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.229825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.229905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.229931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 
00:28:05.486 [2024-10-07 09:48:54.230018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.230046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.230165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.230202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.230283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.230308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.230388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.230414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.230491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.230517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 
00:28:05.486 [2024-10-07 09:48:54.230625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.230651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.230778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.230805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.230898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.230924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.231009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.231035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.231115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.231149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 
00:28:05.486 [2024-10-07 09:48:54.231262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.231288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.231376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.231403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.486 [2024-10-07 09:48:54.231509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.486 [2024-10-07 09:48:54.231536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.486 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.231639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.231676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.231792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.231819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 
00:28:05.487 [2024-10-07 09:48:54.231893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.231918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.232004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.232031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.232139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.232165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.232261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.232289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.232374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.232400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 
00:28:05.487 [2024-10-07 09:48:54.232506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.232532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.232613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.232639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.232752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.232779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.232880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.232920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.233017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.233045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 
00:28:05.487 [2024-10-07 09:48:54.233162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.233189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.233278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.233305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.233384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.233409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.233571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.233598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.233700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.233729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 
00:28:05.487 [2024-10-07 09:48:54.233811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.233837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.233946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.233972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.234049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.234074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.234151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.234177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.234287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.234313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 
00:28:05.487 [2024-10-07 09:48:54.234435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.234464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.234586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.234613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.234728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.234755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.234844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.234869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.234950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.234976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 
00:28:05.487 [2024-10-07 09:48:54.235081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.235108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.235198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.235224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.235313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.235340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.235463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.235491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.235568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.235593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 
00:28:05.487 [2024-10-07 09:48:54.235679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.235705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.235789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.235814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.235926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.235954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.236040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.236067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.236176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.236203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 
00:28:05.487 [2024-10-07 09:48:54.236298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.236324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.487 qpair failed and we were unable to recover it. 00:28:05.487 [2024-10-07 09:48:54.236438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.487 [2024-10-07 09:48:54.236465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.236549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.236574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.236687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.236715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.236805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.236832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 
00:28:05.488 [2024-10-07 09:48:54.236909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.236935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.237017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.237044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.237128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.237153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.237240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.237267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.237355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.237384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 
00:28:05.488 [2024-10-07 09:48:54.237501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.237528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.237612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.237640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.237757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.237784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.237868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.237895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.237982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 
00:28:05.488 [2024-10-07 09:48:54.238083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.238190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.238298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.238416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.238530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 
00:28:05.488 [2024-10-07 09:48:54.238671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.238787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.238891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.238918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.238999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.239025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.239107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.239134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 
00:28:05.488 [2024-10-07 09:48:54.239240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.239267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.239359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.239391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.239477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.239503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.239584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.239608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 00:28:05.488 [2024-10-07 09:48:54.239700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.488 [2024-10-07 09:48:54.239728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.488 qpair failed and we were unable to recover it. 
00:28:05.488 [... 2024-10-07 09:48:54.239821 through 09:48:54.254533: the same "connect() failed, errno = 111" / "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" error pair repeats continuously for tqpair values 0x7fe7ac000b90, 0x7fe7a8000b90, and 0x7fe7b4000b90, each occurrence followed by "qpair failed and we were unable to recover it." ...]
00:28:05.491 [2024-10-07 09:48:54.254692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.254720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 00:28:05.491 [2024-10-07 09:48:54.254808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.254835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 00:28:05.491 [2024-10-07 09:48:54.254913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.254940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 00:28:05.491 [2024-10-07 09:48:54.255111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.255163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 00:28:05.491 [2024-10-07 09:48:54.255273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.255326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 
00:28:05.491 [2024-10-07 09:48:54.255467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.255494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 00:28:05.491 [2024-10-07 09:48:54.255615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.255643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 00:28:05.491 [2024-10-07 09:48:54.255799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.255827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 00:28:05.491 [2024-10-07 09:48:54.255916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.491 [2024-10-07 09:48:54.255943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.491 qpair failed and we were unable to recover it. 00:28:05.491 [2024-10-07 09:48:54.256107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.256171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 
00:28:05.492 [2024-10-07 09:48:54.256309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.256337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.256424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.256452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.256569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.256596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.256690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.256716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.256857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.256883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 
00:28:05.492 [2024-10-07 09:48:54.257076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.257134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.257271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.257298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.257407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.257434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.257544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.257572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.257711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.257738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 
00:28:05.492 [2024-10-07 09:48:54.257886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.257913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.258026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.258053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.258165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.258192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.258274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.258302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.258442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.258469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 
00:28:05.492 [2024-10-07 09:48:54.258611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.258637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.258771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.258799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.258885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.258913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.259055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.259082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.259218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.259245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 
00:28:05.492 [2024-10-07 09:48:54.259323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.259350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.259496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.259524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.259618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.259644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.259737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.259768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.259874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.259901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 
00:28:05.492 [2024-10-07 09:48:54.260019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.260045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.260180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.260208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.260327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.260360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.260472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.260498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.260594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.260625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 
00:28:05.492 [2024-10-07 09:48:54.260717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.260745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.260855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.260882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.261020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.261047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.261126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.261152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.261241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.261268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 
00:28:05.492 [2024-10-07 09:48:54.261375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.261401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.261511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.492 [2024-10-07 09:48:54.261538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.492 qpair failed and we were unable to recover it. 00:28:05.492 [2024-10-07 09:48:54.261654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.261691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.261828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.261855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.261968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.261995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 
00:28:05.493 [2024-10-07 09:48:54.262110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.262137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.262225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.262254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.262366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.262393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.262506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.262533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.262673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.262701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 
00:28:05.493 [2024-10-07 09:48:54.262790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.262817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.262944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.262971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.263112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.263139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.263257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.263284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.263394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.263421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 
00:28:05.493 [2024-10-07 09:48:54.263539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.263566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.263647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.263690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.263797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.263824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.263914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.263941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.264056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.264082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 
00:28:05.493 [2024-10-07 09:48:54.264193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.264221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.264299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.264326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.264436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.264463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.264604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.264631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.264753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.264781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 
00:28:05.493 [2024-10-07 09:48:54.264894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.264921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.265019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.265048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.265160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.265187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.265268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.265300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 00:28:05.493 [2024-10-07 09:48:54.265408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.493 [2024-10-07 09:48:54.265435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.493 qpair failed and we were unable to recover it. 
00:28:05.493 [2024-10-07 09:48:54.265517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.265544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.265652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.265685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.265809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.265836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.265913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.265939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.266042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.266068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.266182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.266208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.266319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.266345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.266455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.266482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.266598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.266626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.266759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.266786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.493 qpair failed and we were unable to recover it.
00:28:05.493 [2024-10-07 09:48:54.266894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.493 [2024-10-07 09:48:54.266921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.267009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.267036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.267159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.267186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.267265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.267292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.267401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.267428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.267534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.267561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.267647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.267681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.267798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.267825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.267952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.267979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.268125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.268152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.268266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.268293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.268368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.268394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.268504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.268531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.268658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.268691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.268835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.268862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.268981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.269098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.269211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.269319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.269439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.269579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.269692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.269804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.269915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.269943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.270083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.270110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.270219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.270246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.270360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.270387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.270498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.270524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.270604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.270636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.270763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.270791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.270904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.270934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.271046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.271074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.271213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.271240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.271321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.271348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.271477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.271504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.271611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.271638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.271786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.271813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.271952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.271978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.494 qpair failed and we were unable to recover it.
00:28:05.494 [2024-10-07 09:48:54.272105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.494 [2024-10-07 09:48:54.272158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.272255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.272282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.272393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.272430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.272545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.272572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.272687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.272716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.272857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.272884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.272969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.272996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.273079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.273106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.273218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.273244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.273349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.273375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.273453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.273480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.273563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.273589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.273699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.273727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.273833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.273860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.273961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.273988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.274099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.274126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.274240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.274269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.274391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.274418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.274525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.274552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.274643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.274678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.274796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.274824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.274935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.274963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.275054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.275082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.275162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.275189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.275313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.275340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.275447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.275475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.275627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.275654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.275802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.275830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.276004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.276059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.276255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.276315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.276433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.276465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.276605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.276632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.276752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.276781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.276870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.276896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.495 qpair failed and we were unable to recover it.
00:28:05.495 [2024-10-07 09:48:54.277059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.495 [2024-10-07 09:48:54.277110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.277295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.277322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.277468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.277495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.277589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.277613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.277697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.277723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.277832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.277858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.277973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.277999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.278157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.278208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.278317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.278344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.278467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.278495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.278609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.278636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.278765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.278792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.278900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.278926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.279040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.279066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.279150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.279177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.279259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.279286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.279401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.279428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.279543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.279570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.279681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.279709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.279798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.279825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.279904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.279930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.280041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.280068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.280182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.280209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.280294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.280322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.280433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.280461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.280598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.280624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.280755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.280796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.280891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.280921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.281061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.281118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.281271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.281326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.281438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.281466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.281582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.281610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.281707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.496 [2024-10-07 09:48:54.281735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.496 qpair failed and we were unable to recover it.
00:28:05.496 [2024-10-07 09:48:54.281850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.496 [2024-10-07 09:48:54.281877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.496 qpair failed and we were unable to recover it. 00:28:05.496 [2024-10-07 09:48:54.282018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.496 [2024-10-07 09:48:54.282079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.496 qpair failed and we were unable to recover it. 00:28:05.496 [2024-10-07 09:48:54.282221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.496 [2024-10-07 09:48:54.282248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.496 qpair failed and we were unable to recover it. 00:28:05.496 [2024-10-07 09:48:54.282385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.496 [2024-10-07 09:48:54.282418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.496 qpair failed and we were unable to recover it. 00:28:05.496 [2024-10-07 09:48:54.282499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.496 [2024-10-07 09:48:54.282526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.496 qpair failed and we were unable to recover it. 
00:28:05.496 [2024-10-07 09:48:54.282647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.496 [2024-10-07 09:48:54.282690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.496 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.282805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.282833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.282917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.282944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.283030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.283057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.283176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.283204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 
00:28:05.497 [2024-10-07 09:48:54.283316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.283342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.283450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.283477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.283587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.283614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.283728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.283755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.283892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.283919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 
00:28:05.497 [2024-10-07 09:48:54.284037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.284064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.284178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.284205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.284358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.284387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.284505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.284532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.284653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.284685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 
00:28:05.497 [2024-10-07 09:48:54.284800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.284827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.284908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.284935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.285020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.285047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.285159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.285186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.285292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.285319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 
00:28:05.497 [2024-10-07 09:48:54.285405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.285433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.285541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.285568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.285676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.285703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.285842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.285868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.285984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.286011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 
00:28:05.497 [2024-10-07 09:48:54.286107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.286133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.286251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.286277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.286388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.286414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.286503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.286530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.286638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.286670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 
00:28:05.497 [2024-10-07 09:48:54.286779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.286806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.286910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.286937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.287051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.287078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.287185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.287212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.287296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.287323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 
00:28:05.497 [2024-10-07 09:48:54.287436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.287463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.287573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.287602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.287720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.287748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.287863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.497 [2024-10-07 09:48:54.287896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.497 qpair failed and we were unable to recover it. 00:28:05.497 [2024-10-07 09:48:54.288008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.288036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 
00:28:05.498 [2024-10-07 09:48:54.288128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.288155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.288266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.288293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.288384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.288411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.288484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.288509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.288628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.288657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 
00:28:05.498 [2024-10-07 09:48:54.288807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.288834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.288948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.288974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.289084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.289111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.289221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.289249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.289330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.289357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 
00:28:05.498 [2024-10-07 09:48:54.289436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.289463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.289546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.289573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.289712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.289740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.289822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.289849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.289996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.290023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 
00:28:05.498 [2024-10-07 09:48:54.290137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.290164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.290316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.290343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.290453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.290479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.290563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.290590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.290681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.290709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 
00:28:05.498 [2024-10-07 09:48:54.290821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.290848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.290988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.291014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.291124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.291151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.291229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.291255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.291341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.291367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 
00:28:05.498 [2024-10-07 09:48:54.291466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.291493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.291572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.291599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.291680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.291708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.291820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.291848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.291990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.292017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 
00:28:05.498 [2024-10-07 09:48:54.292132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.292159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.292252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.292278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.292414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.292441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.292527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.292553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.292680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.292720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 
00:28:05.498 [2024-10-07 09:48:54.292870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.292898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.293096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.293153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.498 [2024-10-07 09:48:54.293381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.498 [2024-10-07 09:48:54.293437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.498 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.293582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.293609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.293733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.293761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 
00:28:05.499 [2024-10-07 09:48:54.293940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.294020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.294189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.294244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.294321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.294349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.294490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.294517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.294629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.294656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 
00:28:05.499 [2024-10-07 09:48:54.294789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.294816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.294973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.295034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.295252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.295307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.295413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.295440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.295557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.295583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 
00:28:05.499 [2024-10-07 09:48:54.295696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.295724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.295866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.295893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.296010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.296037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.296151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.296178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.296264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.296292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 
00:28:05.499 [2024-10-07 09:48:54.296401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.296428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.296510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.296537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.296650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.296692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.296778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.296807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.296905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.296932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 
00:28:05.499 [2024-10-07 09:48:54.297071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.297098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.297178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.297205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.297282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.297308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.297425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.297451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.297535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.297562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 
00:28:05.499 [2024-10-07 09:48:54.297645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.297682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.297780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.297809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.297901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.297928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.298039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.298067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.298208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.298235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 
00:28:05.499 [2024-10-07 09:48:54.298318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.298346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.298450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.298477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.298589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.298617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.298707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.298736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.298847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.298874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 
00:28:05.499 [2024-10-07 09:48:54.298985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-10-07 09:48:54.299012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-10-07 09:48:54.299102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.299130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.299265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.299292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.299408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.299435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.299579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.299606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-10-07 09:48:54.299717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.299745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.299872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.299899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.300093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.300153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.300347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.300399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.300513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.300540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-10-07 09:48:54.300629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.300656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.300774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.300801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.300944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.301000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.301179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.301232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.301399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.301467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-10-07 09:48:54.301580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.301616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.301732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.301760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.301879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.301943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.302061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.302088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.302201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.302228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-10-07 09:48:54.302365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.302392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.302502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.302529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.302675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.302703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.302814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.302841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.302980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.303007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-10-07 09:48:54.303090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.303118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.303235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.303262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.303351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.303380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.303468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.303494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.303610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.303636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-10-07 09:48:54.303805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.303862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.303980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.304034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.304145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.304172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.304276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.304343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-10-07 09:48:54.304428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-10-07 09:48:54.304455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.501 [2024-10-07 09:48:54.304560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.304587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.304671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.304698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.304805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.304832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.304975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.305002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.305111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.305138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.501 [2024-10-07 09:48:54.305227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.305254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.305382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.305421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.305512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.305541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.305679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.305708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.305807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.305834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.501 [2024-10-07 09:48:54.305909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.305967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.306150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.306218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.306534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.306560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.306680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.306709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.306798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.306824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.501 [2024-10-07 09:48:54.306904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.306973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.307247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.307313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.307529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.307555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.307676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.307703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-10-07 09:48:54.307817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-10-07 09:48:54.307844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.504 [2024-10-07 09:48:54.335376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.335440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.335743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.335811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.336076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.336131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.336303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.336382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.336712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.336779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 
00:28:05.504 [2024-10-07 09:48:54.337066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.337132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.337432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.337496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.337793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.337860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.338156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.338222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.338472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.338536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 
00:28:05.504 [2024-10-07 09:48:54.338829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.338887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.339092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.339157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.339422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.339487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.339768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.339835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.340130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.340194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 
00:28:05.504 [2024-10-07 09:48:54.340380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.340441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.340718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.340769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.341028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.341077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.341276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.341348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-10-07 09:48:54.341591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-10-07 09:48:54.341656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-10-07 09:48:54.341940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.342004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.342289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.342354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.342605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.342692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.342949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.343014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.343303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.343369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-10-07 09:48:54.343680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.343738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.343914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.343970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.344237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.344303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.344566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.344633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.344974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.345039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-10-07 09:48:54.345299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.345365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.345662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.345760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.346002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.346067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.346335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.346391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.346694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.346760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-10-07 09:48:54.347049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.347099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.347323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.347389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.347653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.347769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.347935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.347981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.348167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.348213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-10-07 09:48:54.348360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.348406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.348586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.348639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.348823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.348870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.349028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.349080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.349240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.349293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-10-07 09:48:54.349486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.349539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.349770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.349837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.350136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.350201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.350450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.350515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.350771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.350837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-10-07 09:48:54.351055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.351120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.351360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.351425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.351686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.351752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.352001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.352066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.352281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.352347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-10-07 09:48:54.352605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.352684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.352971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.353037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.353295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.353361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-10-07 09:48:54.353602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-10-07 09:48:54.353697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.353950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.353999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 
00:28:05.506 [2024-10-07 09:48:54.354192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.354260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.354503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.354567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.354869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.354935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.355170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.355238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.355498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.355546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 
00:28:05.506 [2024-10-07 09:48:54.355738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.355813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.356040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.356105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.356392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.356456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.356707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.356773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.357004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.357068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 
00:28:05.506 [2024-10-07 09:48:54.357348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.357413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.357625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.357709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.357981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.358046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.358300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.358366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-10-07 09:48:54.358553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-10-07 09:48:54.358617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 
00:28:05.506 [2024-10-07 09:48:54.358887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.506 [2024-10-07 09:48:54.358952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.506 qpair failed and we were unable to recover it.
[... the posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." line above repeat continuously for tqpair=0x1fab230 with addr=10.0.0.2, port=4420, from 09:48:54.358887 through 09:48:54.395017 ...]
00:28:05.509 [2024-10-07 09:48:54.395250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.395316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.395570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.395635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.395861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.395925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.396208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.396273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.396506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.396571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 
00:28:05.509 [2024-10-07 09:48:54.396827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.396894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.397202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.397267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.397550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.397615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.397827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.397894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.398185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.398250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 
00:28:05.509 [2024-10-07 09:48:54.398475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.398540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.398824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.398890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.399117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.399181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.399426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.399492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.399737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.399805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 
00:28:05.509 [2024-10-07 09:48:54.400094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.400160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.400439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.400503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.400788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.400855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.401123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.401188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.401398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.401465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 
00:28:05.509 [2024-10-07 09:48:54.401736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.509 [2024-10-07 09:48:54.401803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.509 qpair failed and we were unable to recover it. 00:28:05.509 [2024-10-07 09:48:54.402091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.402164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.402422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.402488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.402738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.402803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.403057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.403123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 
00:28:05.510 [2024-10-07 09:48:54.403365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.403430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.403694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.403770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.404021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.404087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.404343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.404419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.404689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.404765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 
00:28:05.510 [2024-10-07 09:48:54.405055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.405127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.405377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.405441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.405690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.405756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.406018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.406084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.406327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.406392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 
00:28:05.510 [2024-10-07 09:48:54.406640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.406746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.407003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.407068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.407323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.407388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.407619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.407701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.407908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.407980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 
00:28:05.510 [2024-10-07 09:48:54.408243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.408309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.408565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.408641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.408947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.409013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.409267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.409332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.409578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.409642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 
00:28:05.510 [2024-10-07 09:48:54.409920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.409985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.410233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.410297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.410588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.410652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.410942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.411009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.411247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.411313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 
00:28:05.510 [2024-10-07 09:48:54.411554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.411618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.411839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.411905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.412143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.412209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.412502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.412566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.412787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.412852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 
00:28:05.510 [2024-10-07 09:48:54.413072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.413138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.413387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.413451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.413737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.413804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.414040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.414103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.414358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.414422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 
00:28:05.510 [2024-10-07 09:48:54.414686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.414761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.415045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.415109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.415349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.510 [2024-10-07 09:48:54.415414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.510 qpair failed and we were unable to recover it. 00:28:05.510 [2024-10-07 09:48:54.415615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.415707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.415933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.416000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 
00:28:05.511 [2024-10-07 09:48:54.416295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.416359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.416643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.416735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.416985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.417049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.417264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.417338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.417579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.417643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 
00:28:05.511 [2024-10-07 09:48:54.417947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.418011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.418262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.418327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.418502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.418568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.418893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.418960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 00:28:05.511 [2024-10-07 09:48:54.419205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.511 [2024-10-07 09:48:54.419270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.511 qpair failed and we were unable to recover it. 
00:28:05.511 [2024-10-07 09:48:54.419490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.511 [2024-10-07 09:48:54.419555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.511 qpair failed and we were unable to recover it.
[... the same three-line pattern — connect() failed with errno = 111, sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable — repeats continuously from 09:48:54.419 through 09:48:54.449 ...]
00:28:05.796 [2024-10-07 09:48:54.449925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.796 [2024-10-07 09:48:54.449959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.796 qpair failed and we were unable to recover it.
00:28:05.796 [2024-10-07 09:48:54.450209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.450285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.450525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.450593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.450772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.450806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.450911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.450944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.451162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.451228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 
00:28:05.796 [2024-10-07 09:48:54.451527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.451592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.451772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.451805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.451943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.452016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.452267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.452332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.452619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.452651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 
00:28:05.796 [2024-10-07 09:48:54.452769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.452802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.452902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.452935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.453079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.453143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.453366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.453430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.453629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.453725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 
00:28:05.796 [2024-10-07 09:48:54.453858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.453891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.454053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.454117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.454368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.454433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.454642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.454744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.454849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.454881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 
00:28:05.796 [2024-10-07 09:48:54.455028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.455092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.455313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.455384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.796 [2024-10-07 09:48:54.455590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.796 [2024-10-07 09:48:54.455653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.796 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.455845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.455878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.456039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.456103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 
00:28:05.797 [2024-10-07 09:48:54.456397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.456462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.456740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.456774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.456877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.456910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.457011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.457089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.457350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.457414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 
00:28:05.797 [2024-10-07 09:48:54.457659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.457742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.457844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.457877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.458028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.458092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.458292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.458357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.458647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.458738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 
00:28:05.797 [2024-10-07 09:48:54.458851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.458883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.459109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb91f0 is same with the state(6) to be set 00:28:05.797 [2024-10-07 09:48:54.459531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.459646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.459836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.459871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.459983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.460054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.460316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.460369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 
00:28:05.797 [2024-10-07 09:48:54.460637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.460728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.460888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.460921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.461126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.461192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.461492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.461547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.461781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.461815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 
00:28:05.797 [2024-10-07 09:48:54.461949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.462041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.462317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.462381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.462647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.462730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.462867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.462899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.463059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.463123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 
00:28:05.797 [2024-10-07 09:48:54.463375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.797 [2024-10-07 09:48:54.463451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.797 qpair failed and we were unable to recover it. 00:28:05.797 [2024-10-07 09:48:54.463620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.463715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.463879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.463911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.464045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.464078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.464317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.464380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 
00:28:05.798 [2024-10-07 09:48:54.464581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.464636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.464801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.464834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.464935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.464979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.465109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.465142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.465344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.465411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 
00:28:05.798 [2024-10-07 09:48:54.465694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.465745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.465856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.465889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.466050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.466124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.466485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.466549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.466826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.466882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 
00:28:05.798 [2024-10-07 09:48:54.467202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.467252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.467443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.467524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.467708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.467780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.467985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.468053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.468345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.468414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 
00:28:05.798 [2024-10-07 09:48:54.468662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.468756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.468984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.469071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.469319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.469386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.469693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.469752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.470009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.470076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 
00:28:05.798 [2024-10-07 09:48:54.470418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.470500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.470725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.470784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.471066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.471144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.798 [2024-10-07 09:48:54.471502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.798 [2024-10-07 09:48:54.471569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.798 qpair failed and we were unable to recover it. 00:28:05.799 [2024-10-07 09:48:54.471835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.799 [2024-10-07 09:48:54.471870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.799 qpair failed and we were unable to recover it. 
00:28:05.803 [2024-10-07 09:48:54.505186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.505261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.505488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.505553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.505793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.505861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.506154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.506220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.506516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.506580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 
00:28:05.803 [2024-10-07 09:48:54.506861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.506928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.507232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.507297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.507577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.507642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.507958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.508023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.508289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.508354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 
00:28:05.803 [2024-10-07 09:48:54.508596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.508661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.508997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.509073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.509371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.509435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.509697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.509763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.510027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.510107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 
00:28:05.803 [2024-10-07 09:48:54.510401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.510469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.510740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.510807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.511056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.511127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.511346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.511412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.511695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.511762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 
00:28:05.803 [2024-10-07 09:48:54.511976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.512046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.512364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.512440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.512746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.512812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.513071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.513139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.513446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.513518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 
00:28:05.803 [2024-10-07 09:48:54.513813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.513879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.514124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.514192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.514490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.514566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.514868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.514933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 00:28:05.803 [2024-10-07 09:48:54.515183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.803 [2024-10-07 09:48:54.515249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.803 qpair failed and we were unable to recover it. 
00:28:05.803 [2024-10-07 09:48:54.515539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.515604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.515919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.515985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.516282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.516359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.516659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.516742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.516998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.517064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 
00:28:05.804 [2024-10-07 09:48:54.517359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.517425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.517696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.517763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.518018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.518093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.518384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.518450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.518727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.518796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 
00:28:05.804 [2024-10-07 09:48:54.519036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.519102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.519394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.519459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.519705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.519772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.520036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.520101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.520349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.520415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 
00:28:05.804 [2024-10-07 09:48:54.520711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.520779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.521038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.521104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.521399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.521476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.521717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.521786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.522076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.522142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 
00:28:05.804 [2024-10-07 09:48:54.522427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.522492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.522796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.522873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.523166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.523232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.523450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.523515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.523754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.523821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 
00:28:05.804 [2024-10-07 09:48:54.524050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.524115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.524311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.524378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.524628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.524713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.804 qpair failed and we were unable to recover it. 00:28:05.804 [2024-10-07 09:48:54.524931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.804 [2024-10-07 09:48:54.524996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.525292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.525371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 
00:28:05.805 [2024-10-07 09:48:54.525634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.525715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.525965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.526030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.526294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.526359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.526615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.526701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.526971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.527038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 
00:28:05.805 [2024-10-07 09:48:54.527300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.527369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.527661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.527757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.527998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.528064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.528324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.528391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.528698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.528771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 
00:28:05.805 [2024-10-07 09:48:54.529029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.529096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.529334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.529400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.529647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.529768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.529996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.530063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.530293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.530359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 
00:28:05.805 [2024-10-07 09:48:54.530594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.530659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.530901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.530969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.531186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.531265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.531570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.531646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 00:28:05.805 [2024-10-07 09:48:54.531935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.805 [2024-10-07 09:48:54.532001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.805 qpair failed and we were unable to recover it. 
00:28:05.809 [2024-10-07 09:48:54.569248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.569312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.569593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.569659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.569948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.570013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.570274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.570339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.570589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.570657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 
00:28:05.809 [2024-10-07 09:48:54.570941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.571007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.571264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.571329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.571593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.571686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.571895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.571964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.572217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.572284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 
00:28:05.809 [2024-10-07 09:48:54.572492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.572557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.572835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.809 [2024-10-07 09:48:54.572902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.809 qpair failed and we were unable to recover it. 00:28:05.809 [2024-10-07 09:48:54.573201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.573276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.573577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.573642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.573866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.573934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 
00:28:05.810 [2024-10-07 09:48:54.574231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.574296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.574586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.574650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.574919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.574985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.575232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.575304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.575621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.575706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 
00:28:05.810 [2024-10-07 09:48:54.576011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.576087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.576395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.576460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.576714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.576787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.577054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.577120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.577365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.577429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 
00:28:05.810 [2024-10-07 09:48:54.577693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.577760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.578014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.578080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.578377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.578451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.578737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.578803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.579024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.579090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 
00:28:05.810 [2024-10-07 09:48:54.579293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.579360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.579606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.579685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.579953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.580019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.580230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.580297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.580546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.580612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 
00:28:05.810 [2024-10-07 09:48:54.580888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.580956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.581250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.581316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.581565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.581632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.581903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.581969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.582194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.582259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 
00:28:05.810 [2024-10-07 09:48:54.582556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.810 [2024-10-07 09:48:54.582621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.810 qpair failed and we were unable to recover it. 00:28:05.810 [2024-10-07 09:48:54.582939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.583004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.583242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.583309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.583527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.583595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.583861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.583927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 
00:28:05.811 [2024-10-07 09:48:54.584135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.584203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.584503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.584568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.584796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.584862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.585111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.585177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.585462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.585527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 
00:28:05.811 [2024-10-07 09:48:54.585732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.585801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.586042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.586110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.586351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.586415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.586703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.586771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.587063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.587128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 
00:28:05.811 [2024-10-07 09:48:54.587426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.587490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.587728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.587795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.588063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.588128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.588381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.588445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.588721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.588787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 
00:28:05.811 [2024-10-07 09:48:54.589023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.589090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.589302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.589370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.589623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.589707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.589966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.590033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.590282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.590350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 
00:28:05.811 [2024-10-07 09:48:54.590609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.590691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.590945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.591010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.591308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.591384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.591638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.591723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.592010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.592075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 
00:28:05.811 [2024-10-07 09:48:54.592317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.592382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.811 [2024-10-07 09:48:54.592705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.811 [2024-10-07 09:48:54.592772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.811 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.593077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.593141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.593383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.593449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.593752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.593842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 
00:28:05.812 [2024-10-07 09:48:54.594100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.594166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.594420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.594488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.594751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.594819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.595071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.595137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.595439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.595514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 
00:28:05.812 [2024-10-07 09:48:54.595761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.595828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.596024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.596088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.596379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.596445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.596750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.596817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.597106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.597171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 
00:28:05.812 [2024-10-07 09:48:54.597473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.597539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.597798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.597866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.598126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.598191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.598477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.598543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.812 [2024-10-07 09:48:54.598736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.598802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 
00:28:05.812 [2024-10-07 09:48:54.599092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.812 [2024-10-07 09:48:54.599158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.812 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.599408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.599473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.599662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.599742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.599979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.600044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.600273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.600338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 
00:28:05.813 [2024-10-07 09:48:54.600587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.600651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.600956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.601021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.601240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.601304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.601585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.601650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.601891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.601957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 
00:28:05.813 [2024-10-07 09:48:54.602188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.602252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.602525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.602591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.602869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.602936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.603193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.603258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.603500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.603567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 
00:28:05.813 [2024-10-07 09:48:54.603803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.603872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.604100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.604166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.604409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.604477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.604712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.604780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.605041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.605105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 
00:28:05.813 [2024-10-07 09:48:54.605406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.605471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.605727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.605796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.606052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.606120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.606413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.606483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.606737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.606815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 
00:28:05.813 [2024-10-07 09:48:54.607080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.607146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.607417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.607491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.607762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.607827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.608086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.608151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.608405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.608470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 
00:28:05.813 [2024-10-07 09:48:54.608712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.608779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.813 qpair failed and we were unable to recover it. 00:28:05.813 [2024-10-07 09:48:54.609068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.813 [2024-10-07 09:48:54.609138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.609395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.609465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.609702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.609769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.609999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.610067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 
00:28:05.814 [2024-10-07 09:48:54.610323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.610392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.610649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.610729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.610928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.611002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.611302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.611368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.611597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.611662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 
00:28:05.814 [2024-10-07 09:48:54.611986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.612062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.612338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.612406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.612702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.612794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.613028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.613099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.613311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.613380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 
00:28:05.814 [2024-10-07 09:48:54.613695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.613763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.613991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.614067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.614333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.614399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.614687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.614756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.615017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.615082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 
00:28:05.814 [2024-10-07 09:48:54.615371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.615436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.615702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.615770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.615987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.616052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.616268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.616336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.616634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.616732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 
00:28:05.814 [2024-10-07 09:48:54.616970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.617034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.617248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.617322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.617617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.617714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.617985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.618057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 00:28:05.814 [2024-10-07 09:48:54.618312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.814 [2024-10-07 09:48:54.618377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.814 qpair failed and we were unable to recover it. 
00:28:05.815 [2024-10-07 09:48:54.618562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.618599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.618728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.618761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.618867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.618899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.619071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.619103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.619213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.619253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 
00:28:05.815 [2024-10-07 09:48:54.619393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.619436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.619715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.619749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.619863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.619895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.620001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.620033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.620181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.620261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 
00:28:05.815 [2024-10-07 09:48:54.620471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.620530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.620639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.620685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.620854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.620887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.621003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.621046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.621182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.621215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 
00:28:05.815 [2024-10-07 09:48:54.621311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.621343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.621437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.621470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.621595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.621644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.622141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.622182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.622322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.622356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 
00:28:05.815 [2024-10-07 09:48:54.622505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.622538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.622641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.622687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.622795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.622828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.622928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.622959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.623125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.623157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 
00:28:05.815 [2024-10-07 09:48:54.623263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.623295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.623416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.623448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.623616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.623648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.623756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.815 [2024-10-07 09:48:54.623789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.815 qpair failed and we were unable to recover it. 00:28:05.815 [2024-10-07 09:48:54.623889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.816 [2024-10-07 09:48:54.623921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.816 qpair failed and we were unable to recover it. 
00:28:05.816 [2024-10-07 09:48:54.624035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.624079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.624244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.624297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.624444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.624480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.624588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.624627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.624784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.624820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.624931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.624963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.625064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.625097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.625259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.625301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.625437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.625470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.625569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.625613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.625759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.625793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.625898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.625931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.626079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.626113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.626281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.626313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.627421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.627483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.627628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.627680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.627790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.627821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.627963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.627995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.628148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.628196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.628337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.628371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.628504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.628537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.628641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.628686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.628788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.628821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.628992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.629026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.629239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.629306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.629534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.629602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.629797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.629830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.629962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.629994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.630215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.630250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.816 qpair failed and we were unable to recover it.
00:28:05.816 [2024-10-07 09:48:54.630392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.816 [2024-10-07 09:48:54.630426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.630551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.630585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.630795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.630828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.630934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.630965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.631053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.631085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.631232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.631265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.631433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.631498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.631721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.631754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.631882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.631915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.632053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.632085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.632214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.632246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.632434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.632469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.632600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.632679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.632791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.632826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.632959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.632999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.633130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.633182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.633344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.633413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.633654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.633721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.633835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.633866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.633999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.634031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.634160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.634207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.634354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.634384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.634509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.634539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.634649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.634703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.634809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.634839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.634940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.634977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.635116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.635147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.635277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.635312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.635479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.635508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.635591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.635620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.635731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.635761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.635867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.635914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.636097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.817 [2024-10-07 09:48:54.636132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.817 qpair failed and we were unable to recover it.
00:28:05.817 [2024-10-07 09:48:54.636319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.636368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.636508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.636544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.636700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.636746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.636841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.636876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.636988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.637020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.637231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.637279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.637395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.637457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.637599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.637633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.637769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.637799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.637888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.637918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.638081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.638113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.638343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.638415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.638648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.638692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.638810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.638843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.638931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.638961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.639065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.639093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.639198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.639237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.639372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.639423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.639543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.639573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.639715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.639750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.639863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.639892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.640014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.640047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.640139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.640168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.640261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.640290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.640399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.640428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.818 [2024-10-07 09:48:54.640568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.818 [2024-10-07 09:48:54.640611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.818 qpair failed and we were unable to recover it.
00:28:05.819 [2024-10-07 09:48:54.640763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.819 [2024-10-07 09:48:54.640794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.819 qpair failed and we were unable to recover it.
00:28:05.819 [2024-10-07 09:48:54.640883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.819 [2024-10-07 09:48:54.640912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.819 qpair failed and we were unable to recover it.
00:28:05.819 [2024-10-07 09:48:54.641008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.819 [2024-10-07 09:48:54.641037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.819 qpair failed and we were unable to recover it.
00:28:05.819 [2024-10-07 09:48:54.641153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.819 [2024-10-07 09:48:54.641182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.819 qpair failed and we were unable to recover it.
00:28:05.819 [2024-10-07 09:48:54.641349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.819 [2024-10-07 09:48:54.641404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.819 qpair failed and we were unable to recover it.
00:28:05.819 [2024-10-07 09:48:54.641580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.819 [2024-10-07 09:48:54.641609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.819 qpair failed and we were unable to recover it.
00:28:05.819 [2024-10-07 09:48:54.641738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.819 [2024-10-07 09:48:54.641773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.819 qpair failed and we were unable to recover it.
00:28:05.819 [2024-10-07 09:48:54.641890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.641919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.642092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.642124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.642273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.642323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.643146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.643196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.643338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.643369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 
00:28:05.819 [2024-10-07 09:48:54.643489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.643533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.643676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.643720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.643816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.643846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.643967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.643999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.644099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.644129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 
00:28:05.819 [2024-10-07 09:48:54.644225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.644255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.644350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.644396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.644524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.644553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.644690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.644720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.644809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.644838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 
00:28:05.819 [2024-10-07 09:48:54.645012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.645049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.645135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.645181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.645301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.645331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.646346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.646396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.646553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.646584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 
00:28:05.819 [2024-10-07 09:48:54.646739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.646770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.646868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.646897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.647051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.647080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.819 qpair failed and we were unable to recover it. 00:28:05.819 [2024-10-07 09:48:54.647187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.819 [2024-10-07 09:48:54.647216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.647343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.647388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 
00:28:05.820 [2024-10-07 09:48:54.647540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.647570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.647690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.647750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.647879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.647909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.648036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.648081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.648259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.648289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 
00:28:05.820 [2024-10-07 09:48:54.648475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.648504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.648653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.648711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.648797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.648826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.648942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.648979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.649077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.649121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 
00:28:05.820 [2024-10-07 09:48:54.649247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.649277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.649481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.649511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.649681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.649710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.649797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.649825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.649947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.649975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 
00:28:05.820 [2024-10-07 09:48:54.650203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.650268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.650499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.650570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.650788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.650817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.650940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.650977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.651174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.651239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 
00:28:05.820 [2024-10-07 09:48:54.651485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.651549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.651767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.651796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.652010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.652044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.652205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.652255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.652475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.652553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 
00:28:05.820 [2024-10-07 09:48:54.652752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.652782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.652881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.652909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.820 qpair failed and we were unable to recover it. 00:28:05.820 [2024-10-07 09:48:54.653064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.820 [2024-10-07 09:48:54.653093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.653239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.653274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.653489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.653566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 
00:28:05.821 [2024-10-07 09:48:54.653740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.653772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.653920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.653949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.654076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.654122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.654266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.654315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.654482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.654553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 
00:28:05.821 [2024-10-07 09:48:54.654757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.654786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.654901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.654931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.655085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.655124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.655431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.655500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.655726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.655755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 
00:28:05.821 [2024-10-07 09:48:54.655877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.655905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.656003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.656046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.656300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.656372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.656565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.656595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.656690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.656719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 
00:28:05.821 [2024-10-07 09:48:54.656910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.656956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.658564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.658602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.658765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.658796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.659558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.659595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.659772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.659804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 
00:28:05.821 [2024-10-07 09:48:54.659933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.659963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.660122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.660167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.660362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.660394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.660528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.660559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.660696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.660726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 
00:28:05.821 [2024-10-07 09:48:54.660827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.660862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.660965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.660998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.661094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.661123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.821 qpair failed and we were unable to recover it. 00:28:05.821 [2024-10-07 09:48:54.661269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.821 [2024-10-07 09:48:54.661315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.822 qpair failed and we were unable to recover it. 00:28:05.822 [2024-10-07 09:48:54.661491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.822 [2024-10-07 09:48:54.661541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.822 qpair failed and we were unable to recover it. 
00:28:05.822 [2024-10-07 09:48:54.661672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.822 [2024-10-07 09:48:54.661703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.822 qpair failed and we were unable to recover it.
00:28:05.823 [2024-10-07 09:48:54.668255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.823 [2024-10-07 09:48:54.668301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.823 qpair failed and we were unable to recover it.
[The connect() failed / sock connection error / qpair failed record triple repeats continuously from 09:48:54.661672 through 09:48:54.678431, alternating between tqpair=0x1fab230 and tqpair=0x7fe7ac000b90; every attempt fails with errno = 111 against addr=10.0.0.2, port=4420, and no qpair is recovered.]
00:28:05.826 [2024-10-07 09:48:54.678539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.678565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.678694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.678720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.678830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.678856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.678970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.678995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.679075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.679100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 
00:28:05.826 [2024-10-07 09:48:54.679202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.679228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.679321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.679361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.679456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.679483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.679567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.679594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.679707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.679735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 
00:28:05.826 [2024-10-07 09:48:54.679847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.679873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.679952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.679978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.680075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.680101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.680217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.680244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.680360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.680387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 
00:28:05.826 [2024-10-07 09:48:54.680527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.680558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.680705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.680731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.680819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.680845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.680948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.680974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.681055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.681080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 
00:28:05.826 [2024-10-07 09:48:54.681168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.681194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.681273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.681300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.681407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.681433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.681548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.681575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.681721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.681747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 
00:28:05.826 [2024-10-07 09:48:54.681850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.681876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.826 [2024-10-07 09:48:54.682018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.826 [2024-10-07 09:48:54.682044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.826 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.682163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.682190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.682305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.682331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.682422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.682449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 
00:28:05.827 [2024-10-07 09:48:54.682534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.682561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.682686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.682714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.682868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.682894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.683009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.683035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.683177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.683204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 
00:28:05.827 [2024-10-07 09:48:54.683352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.683378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.683458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.683485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.683624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.683651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.683797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.683828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.683994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.684019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 
00:28:05.827 [2024-10-07 09:48:54.684107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.684134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.684276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.684302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.684427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.684453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.684557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.684584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.684691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.684718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 
00:28:05.827 [2024-10-07 09:48:54.684879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.684911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.685148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.685182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.685318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.685345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.685457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.685484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.685611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.685637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 
00:28:05.827 [2024-10-07 09:48:54.685742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.685768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.685900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.685931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.686083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.686109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.686224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.686250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.686334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.686361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 
00:28:05.827 [2024-10-07 09:48:54.686474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.686505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.686661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.827 [2024-10-07 09:48:54.686711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.827 qpair failed and we were unable to recover it. 00:28:05.827 [2024-10-07 09:48:54.686892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.686924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.687031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.687108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.687231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.687257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 
00:28:05.828 [2024-10-07 09:48:54.687392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.687418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.687526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.687551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.687641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.687675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.687765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.687793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.687877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.687903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 
00:28:05.828 [2024-10-07 09:48:54.688040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.688066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.688206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.688232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.688341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.688367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.688453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.688479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.688602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.688628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 
00:28:05.828 [2024-10-07 09:48:54.688788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.688815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.688937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.688965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.689077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.689103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.689217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.689242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 00:28:05.828 [2024-10-07 09:48:54.689344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.828 [2024-10-07 09:48:54.689371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.828 qpair failed and we were unable to recover it. 
00:28:05.828 [2024-10-07 09:48:54.689481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.689507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.689643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.689676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.689822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.689849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.689984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.690011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.690138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.690164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.690247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.690273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.690350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.690376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.690557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.690658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.690835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.690865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.690994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.691020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.691149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.691175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.828 qpair failed and we were unable to recover it.
00:28:05.828 [2024-10-07 09:48:54.691257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.828 [2024-10-07 09:48:54.691284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.691394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.691421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.691505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.691533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.691626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.691653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.691744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.691772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.691914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.691940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.692086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.692113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.692192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.692218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.692356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.692382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.692518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.692548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.692660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.692692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.692831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.692857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.692950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.692977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.693088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.693114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.693274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.693307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.693460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.693494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.693649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.693706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.693808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.693841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.694001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.694050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.694191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.694242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.694395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.694443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.694543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.694574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.694682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.694714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.694893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.694941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.695130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.695177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.695318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.695365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.829 qpair failed and we were unable to recover it.
00:28:05.829 [2024-10-07 09:48:54.695486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.829 [2024-10-07 09:48:54.695516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.695626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.695657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.695809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.695857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.696042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.696093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.696238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.696286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.696425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.696455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.696584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.696614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.696759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.696790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.696912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.696943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.697091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.697123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.697231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.697269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.697393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.697423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.697570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.697601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.697724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.697756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.697889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.697919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.698052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.698083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.698217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.698248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.698377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.698408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.698501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.698532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.698690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.698722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.698858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.698905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.699020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.699067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.699196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.699228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.699353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.699384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.699508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.699556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.699709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.699744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.699847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.699880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.830 [2024-10-07 09:48:54.700018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.830 [2024-10-07 09:48:54.700051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.830 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.700179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.700213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.700374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.700408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.700515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.700548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.700724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.700757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.700916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.700964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.701165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.701199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.701308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.701342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.701512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.701545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.701683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.701716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.701849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.701887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.702000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.702034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.702227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.702260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.702395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.702429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.702524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.702558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.702729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.702762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.702898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.702930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.703107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.703140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.703296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.703328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.703458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.703492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.703630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.703663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.703792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.703824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.703938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.703970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.704160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.704193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.704365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.704398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.704554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.704587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.704719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.704752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.704860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.704894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.831 [2024-10-07 09:48:54.705036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.831 [2024-10-07 09:48:54.705084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.831 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.705217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.705258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.705420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.705453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.705640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.705732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.705872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.705904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.706048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.706096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.706271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.706304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.706437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.706470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.706624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.706662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.706814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.706847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.707030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.707063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.707260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.707293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.707407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.707441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.707604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.707637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.707843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.707890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.708043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.708076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.708222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.708276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.708437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.708485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.708580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.708611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.708787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.708819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.708924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.708956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.709134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.709181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.709305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.832 [2024-10-07 09:48:54.709336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.832 qpair failed and we were unable to recover it.
00:28:05.832 [2024-10-07 09:48:54.709474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.832 [2024-10-07 09:48:54.709505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.832 qpair failed and we were unable to recover it. 00:28:05.832 [2024-10-07 09:48:54.709637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.832 [2024-10-07 09:48:54.709674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.832 qpair failed and we were unable to recover it. 00:28:05.832 [2024-10-07 09:48:54.709814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.832 [2024-10-07 09:48:54.709846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.832 qpair failed and we were unable to recover it. 00:28:05.832 [2024-10-07 09:48:54.709972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.832 [2024-10-07 09:48:54.710003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.832 qpair failed and we were unable to recover it. 00:28:05.832 [2024-10-07 09:48:54.710112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.832 [2024-10-07 09:48:54.710144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.832 qpair failed and we were unable to recover it. 
00:28:05.832 [2024-10-07 09:48:54.710302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.832 [2024-10-07 09:48:54.710333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.832 qpair failed and we were unable to recover it. 00:28:05.832 [2024-10-07 09:48:54.710467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.832 [2024-10-07 09:48:54.710502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.832 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.710608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.710641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.710771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.710819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.710958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.710994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 
00:28:05.833 [2024-10-07 09:48:54.711129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.711162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.711257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.711289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.711407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.711440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.711588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.711619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.711735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.711769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 
00:28:05.833 [2024-10-07 09:48:54.711954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.711988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.712159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.712207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.712348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.712395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.712484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.712515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.712643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.712692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 
00:28:05.833 [2024-10-07 09:48:54.712791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.712823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.712921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.712953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.713085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.713116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.713273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.713304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.713461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.713492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 
00:28:05.833 [2024-10-07 09:48:54.713697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.713729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.713851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.713882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.714017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.714049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.714149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.714183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.714316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.714348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 
00:28:05.833 [2024-10-07 09:48:54.714449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.714481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.714570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.714601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.714733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.714764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.714861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.714891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.715028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.715059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 
00:28:05.833 [2024-10-07 09:48:54.715181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.833 [2024-10-07 09:48:54.715212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.833 qpair failed and we were unable to recover it. 00:28:05.833 [2024-10-07 09:48:54.715322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.715354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.715463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.715494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.715633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.715688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.715831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.715865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 
00:28:05.834 [2024-10-07 09:48:54.716029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.716075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.716213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.716246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.716409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.716441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.716575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.716606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.716752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.716785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 
00:28:05.834 [2024-10-07 09:48:54.716952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.717000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.717094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.717125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.717251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.717282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.717391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.717422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.717576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.717608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 
00:28:05.834 [2024-10-07 09:48:54.717757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.717791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.717911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.717944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.718101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.718131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.718219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.718250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.718389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.718421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 
00:28:05.834 [2024-10-07 09:48:54.718548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.718580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.718688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.718721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.718856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.718888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.719014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.719045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.719133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.719163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 
00:28:05.834 [2024-10-07 09:48:54.719258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.719290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.719408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.719440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.719529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.719559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.719684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.719715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.719819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.719850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 
00:28:05.834 [2024-10-07 09:48:54.719944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.719974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.834 [2024-10-07 09:48:54.720107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.834 [2024-10-07 09:48:54.720138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.834 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.720266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.720298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.720426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.720458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.720621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.720652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 
00:28:05.835 [2024-10-07 09:48:54.720758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.720789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.720922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.720952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.721064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.721094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.721217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.721248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.721401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.721432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 
00:28:05.835 [2024-10-07 09:48:54.721562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.721592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.721698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.721730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.721815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.721846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.721967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.721998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 00:28:05.835 [2024-10-07 09:48:54.722087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.835 [2024-10-07 09:48:54.722118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.835 qpair failed and we were unable to recover it. 
00:28:05.836 [2024-10-07 09:48:54.728885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.836 [2024-10-07 09:48:54.728916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.836 qpair failed and we were unable to recover it.
00:28:05.836 [2024-10-07 09:48:54.729016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.729051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.837 [2024-10-07 09:48:54.729177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.729207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.837 [2024-10-07 09:48:54.729311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.729342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.837 [2024-10-07 09:48:54.729447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.729495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.837 [2024-10-07 09:48:54.731334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.731365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.837 [2024-10-07 09:48:54.731467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.731497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.837 [2024-10-07 09:48:54.731618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.731678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.837 [2024-10-07 09:48:54.731852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.731884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.837 [2024-10-07 09:48:54.732015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.837 [2024-10-07 09:48:54.732046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:05.837 qpair failed and we were unable to recover it.
00:28:05.839 [2024-10-07 09:48:54.740352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.740386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.740530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.740561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.740696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.740728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.740902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.740953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.741069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.741101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 
00:28:05.839 [2024-10-07 09:48:54.741262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.741310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.741439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.741473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.741603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.741635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.741795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.741830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.741988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.742022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 
00:28:05.839 [2024-10-07 09:48:54.742158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.742191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.742329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.742362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.742492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.742524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.742650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.742687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.742808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.742841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 
00:28:05.839 [2024-10-07 09:48:54.742995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.743042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.743189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.743236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.743413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.743462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.743624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.743658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.743826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.743858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 
00:28:05.839 [2024-10-07 09:48:54.743997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.839 [2024-10-07 09:48:54.744029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.839 qpair failed and we were unable to recover it. 00:28:05.839 [2024-10-07 09:48:54.744120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.744151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.744278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.744309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.744469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.744501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.744608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.744641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 
00:28:05.840 [2024-10-07 09:48:54.744785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.744822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.744931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.744986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.745144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.745177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.745315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.745349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.745487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.745520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 
00:28:05.840 [2024-10-07 09:48:54.745659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.745716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.745851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.745884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.746036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.746085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.746197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.746228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.746389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.746437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 
00:28:05.840 [2024-10-07 09:48:54.746610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.746641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.746752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.746786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.746933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.746979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.747099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.747133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.747251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.747284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 
00:28:05.840 [2024-10-07 09:48:54.747412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.747444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.747612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.747644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.747797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.747830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.747981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.748029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.748172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.748219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 
00:28:05.840 [2024-10-07 09:48:54.748358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.748405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.748529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.748560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.748772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.748804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.748983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.749014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.749148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.749178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 
00:28:05.840 [2024-10-07 09:48:54.749314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.749345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.749476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.749507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.749639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.749684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.840 [2024-10-07 09:48:54.749822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.840 [2024-10-07 09:48:54.749853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.840 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.749983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.750014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 
00:28:05.841 [2024-10-07 09:48:54.750211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.750242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.750373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.750404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.750495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.750526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.750654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.750692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.750821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.750852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 
00:28:05.841 [2024-10-07 09:48:54.750979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.751009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.751131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.751162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.751321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.751352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.751482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.751512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.751615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.751647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 
00:28:05.841 [2024-10-07 09:48:54.751859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.751890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.752004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.752034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.752121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.752152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.752278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.752309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.752417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.752447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 
00:28:05.841 [2024-10-07 09:48:54.752580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.752610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.752734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.752765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.752898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.752929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.753085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.753116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 00:28:05.841 [2024-10-07 09:48:54.753282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.841 [2024-10-07 09:48:54.753312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:05.841 qpair failed and we were unable to recover it. 
00:28:05.841 [2024-10-07 09:48:54.753473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.841 [2024-10-07 09:48:54.753503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:05.841 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously from 09:48:54.753 through 09:48:54.772, alternating between tqpair=0x1fab230, tqpair=0x7fe7b4000b90, and tqpair=0x7fe7a8000b90, all with addr=10.0.0.2, port=4420 ...]
00:28:06.136 [2024-10-07 09:48:54.772234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.772266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.772372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.772403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.772496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.772526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.772615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.772646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.772820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.772851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 
00:28:06.136 [2024-10-07 09:48:54.772981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.773011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.773111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.773142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.773230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.773262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.773388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.773418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.773541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.773572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 
00:28:06.136 [2024-10-07 09:48:54.773776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.773813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.773934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.773981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.774109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.774139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.774295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.774327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.774459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.774504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 
00:28:06.136 [2024-10-07 09:48:54.774654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.774708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.774858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.774892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.775008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.775041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.775175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.775207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.775322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.775354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 
00:28:06.136 [2024-10-07 09:48:54.775524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.775556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.775700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.775731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.775858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.775889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.776067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.776099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.776231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.776263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 
00:28:06.136 [2024-10-07 09:48:54.776359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.776391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.776528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.776561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.776714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.776779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.776890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.776923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.777064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.777097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 
00:28:06.136 [2024-10-07 09:48:54.777207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.777242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.777377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.777410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.777509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.777542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.777640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.777685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.777869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.777900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 
00:28:06.136 [2024-10-07 09:48:54.778129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.136 [2024-10-07 09:48:54.778164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.136 qpair failed and we were unable to recover it. 00:28:06.136 [2024-10-07 09:48:54.778310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.778344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.778487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.778532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.778652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.778690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.778783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.778814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 
00:28:06.137 [2024-10-07 09:48:54.778948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.778980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.779131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.779162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.779256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.779288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.779391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.779424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.779555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.779586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 
00:28:06.137 [2024-10-07 09:48:54.779727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.779758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.779857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.779889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.780009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.780055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.780210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.780258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.780385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.780438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 
00:28:06.137 [2024-10-07 09:48:54.780565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.780597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.780706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.780738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.780947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.780994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.781134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.781181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.781329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.781378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 
00:28:06.137 [2024-10-07 09:48:54.781508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.781539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.781634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.781677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.781810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.781842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.781972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.782003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.782156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.782187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 
00:28:06.137 [2024-10-07 09:48:54.782280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.782311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.782445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.782477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.782638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.782679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.782782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.782813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.782971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.783005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 
00:28:06.137 [2024-10-07 09:48:54.783143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.783190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.783329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.783376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.783503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.783534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.783681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.783729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.783840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.783872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 
00:28:06.137 [2024-10-07 09:48:54.783984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.784017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.784182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.784213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.784369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.784399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.784506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.137 [2024-10-07 09:48:54.784537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.137 qpair failed and we were unable to recover it. 00:28:06.137 [2024-10-07 09:48:54.784632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.138 [2024-10-07 09:48:54.784663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.138 qpair failed and we were unable to recover it. 
00:28:06.138 [2024-10-07 09:48:54.784806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.138 [2024-10-07 09:48:54.784838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.138 qpair failed and we were unable to recover it.
00:28:06.138 [2024-10-07 09:48:54.785295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.138 [2024-10-07 09:48:54.785344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.138 qpair failed and we were unable to recover it.
00:28:06.138 [2024-10-07 09:48:54.786793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.138 [2024-10-07 09:48:54.786840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.138 qpair failed and we were unable to recover it.
00:28:06.141 [... the same connect()/qpair-failure triplet repeats for tqpair handles 0x1fab230, 0x7fe7a8000b90, and 0x7fe7b4000b90, all targeting addr=10.0.0.2, port=4420, through 2024-10-07 09:48:54.804315 ...]
00:28:06.141 [2024-10-07 09:48:54.804497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.804531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.804689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.804722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.804849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.804882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.805048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.805098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.805249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.805280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 
00:28:06.141 [2024-10-07 09:48:54.805407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.805438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.805568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.805600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.805725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.805755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.805849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.805880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.806019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.806051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 
00:28:06.141 [2024-10-07 09:48:54.806154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.806185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.806311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.806342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.806442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.806473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.806595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.806626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.806769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.806800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 
00:28:06.141 [2024-10-07 09:48:54.806933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.806964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.807097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.807128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.807279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.807310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.807431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.807462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.807582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.807613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 
00:28:06.141 [2024-10-07 09:48:54.807747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.807779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.807876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.807907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.808053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.808099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.808231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.808262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.808399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.808431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 
00:28:06.141 [2024-10-07 09:48:54.808560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.808592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.808730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.808762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.808874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.808906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.809085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.809135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.809283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.809332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 
00:28:06.141 [2024-10-07 09:48:54.809461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.809492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.809625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.141 [2024-10-07 09:48:54.809656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.141 qpair failed and we were unable to recover it. 00:28:06.141 [2024-10-07 09:48:54.809778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.809827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.809975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.810022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.810114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.810146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 
00:28:06.142 [2024-10-07 09:48:54.810285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.810334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.810445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.810476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.810605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.810638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.810798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.810833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.811006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.811039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 
00:28:06.142 [2024-10-07 09:48:54.811172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.811206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.811368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.811401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.811513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.811548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.811691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.811724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.811851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.811882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 
00:28:06.142 [2024-10-07 09:48:54.812013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.812044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.812224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.812257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.812369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.812401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.812553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.812586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.812774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.812807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 
00:28:06.142 [2024-10-07 09:48:54.812907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.812939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.813112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.813160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.813275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.813324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.813458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.813489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.813620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.813651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 
00:28:06.142 [2024-10-07 09:48:54.813771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.813805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.813926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.813956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.814075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.814106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.814230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.814260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.814376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.814407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 
00:28:06.142 [2024-10-07 09:48:54.814498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.814529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.814628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.814658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.814795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.814831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.814968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.815000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.815101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.815132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 
00:28:06.142 [2024-10-07 09:48:54.815298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.815329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.815489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.815520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.815654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.815694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.815788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.815818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.142 [2024-10-07 09:48:54.815914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.815945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 
00:28:06.142 [2024-10-07 09:48:54.816041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.142 [2024-10-07 09:48:54.816071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.142 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.816162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.816192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.816318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.816349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.816445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.816477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.816618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.816678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 
00:28:06.143 [2024-10-07 09:48:54.816802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.816838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.816984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.817029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.817185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.817220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.817362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.817396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.817531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.817564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 
00:28:06.143 [2024-10-07 09:48:54.817765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.817822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.817979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.818031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.818209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.818259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.818398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.818445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 00:28:06.143 [2024-10-07 09:48:54.818549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.143 [2024-10-07 09:48:54.818581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.143 qpair failed and we were unable to recover it. 
00:28:06.143 [2024-10-07 09:48:54.818721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.818754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.818897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.818927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.819058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.819089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.819190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.819221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.819381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.819411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.819547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.819578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.819712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.819760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.819914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.819960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.820084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.820118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.820253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.820284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.820406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.820437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.820580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.820614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.820783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.820816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.820964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.821016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.821110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.821141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.821260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.821308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.821428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.821459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.821593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.821624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.821778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.821827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.822017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.822065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.822215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.822262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.822357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.822389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.822530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.822561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.822661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.822698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.143 qpair failed and we were unable to recover it.
00:28:06.143 [2024-10-07 09:48:54.822829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.143 [2024-10-07 09:48:54.822860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.822987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.823019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.823175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.823206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.823345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.823376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.823473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.823504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.823659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.823696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.823871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.823919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.824039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.824094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.824233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.824264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.824396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.824427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.824513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.824544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.824702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.824734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.824827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.824859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.824987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.825018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.825152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.825183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.825340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.825370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.825489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.825520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.825613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.825644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.825782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.825814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.825939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.825970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.826100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.826132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.826236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.826267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.826395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.826426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.826582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.826614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.826775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.826806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.826938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.826968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.827131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.827162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.827293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.827324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.827450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.827481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.827577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.827608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.827749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.827780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.827885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.827916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.828053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.828083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.828215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.144 [2024-10-07 09:48:54.828245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.144 qpair failed and we were unable to recover it.
00:28:06.144 [2024-10-07 09:48:54.828382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.828413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.828557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.828588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.828707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.828738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.828838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.828869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.828998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.829029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.829163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.829194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.829291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.829321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.829445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.829476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.829604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.829634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.829772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.829809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.829951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.829983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.830114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.830145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.830274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.830305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.830437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.830468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.830602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.830640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.830796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.830843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.830968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.831003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.831131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.831163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.831292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.831324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.831429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.831460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.831613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.831660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.831816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.831848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.831958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.831989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.832112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.832143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.832238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.832269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.832436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.832466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.832622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.832652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.832869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.832918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.833078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.833113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.833281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.833316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.833480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.833514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.833622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.833656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.833818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.833849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.834029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.834062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.834201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.834235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.834351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.834385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.834567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.145 [2024-10-07 09:48:54.834600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.145 qpair failed and we were unable to recover it.
00:28:06.145 [2024-10-07 09:48:54.834710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.145 [2024-10-07 09:48:54.834743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.145 qpair failed and we were unable to recover it. 00:28:06.145 [2024-10-07 09:48:54.834882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.145 [2024-10-07 09:48:54.834912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.835053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.835083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.835188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.835218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.835317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.835347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 
00:28:06.146 [2024-10-07 09:48:54.835501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.835530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.835640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.835676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.835804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.835835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.835933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.835963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.836095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.836126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 
00:28:06.146 [2024-10-07 09:48:54.836233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.836262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.836398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.836431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.836532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.836564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.836683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.836730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.836831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.836864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 
00:28:06.146 [2024-10-07 09:48:54.836999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.837033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.837161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.837193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.837339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.837372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.837477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.837509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.837605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.837636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 
00:28:06.146 [2024-10-07 09:48:54.837771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.837802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.837909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.837939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.838069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.838100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.838242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.838276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.838409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.838441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 
00:28:06.146 [2024-10-07 09:48:54.838598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.838629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.838740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.838773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.838883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.838916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.839011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.839042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.839166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.839198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 
00:28:06.146 [2024-10-07 09:48:54.839333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.839365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.839531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.839563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.839696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.839729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.839826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.839856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.839955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.839984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 
00:28:06.146 [2024-10-07 09:48:54.840145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.840176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.840335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.840365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.840494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.840524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.840658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.840697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 00:28:06.146 [2024-10-07 09:48:54.840794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.146 [2024-10-07 09:48:54.840824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.146 qpair failed and we were unable to recover it. 
00:28:06.146 [2024-10-07 09:48:54.840965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.841013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.841153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.841201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.841363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.841393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.841577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.841610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.841747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.841779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 
00:28:06.147 [2024-10-07 09:48:54.841920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.841951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.842084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.842114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.842225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.842256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.842385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.842414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.842502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.842533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 
00:28:06.147 [2024-10-07 09:48:54.842653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.842692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.842824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.842854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.842980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.843010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.843148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.843179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.843267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.843297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 
00:28:06.147 [2024-10-07 09:48:54.843395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.843425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.843519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.843548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.843681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.843712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.843828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.843859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.844013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.844043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 
00:28:06.147 [2024-10-07 09:48:54.844182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.844227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.844371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.844405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.844537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.844570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.844706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.844739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.844905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.844937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 
00:28:06.147 [2024-10-07 09:48:54.845042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.845074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.845250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.845286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.845392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.845427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.845593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.845627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.845750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.845783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 
00:28:06.147 [2024-10-07 09:48:54.845893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.845925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.846039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.846081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.846275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.846311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.846454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.846489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.846597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.846632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 
00:28:06.147 [2024-10-07 09:48:54.846818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.846850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.846998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.847032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.847214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.847246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.147 [2024-10-07 09:48:54.847399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.147 [2024-10-07 09:48:54.847433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.147 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.847572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.847607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 
00:28:06.148 [2024-10-07 09:48:54.847758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.847806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.847940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.847972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.848111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.848143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.848272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.848303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.848412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.848459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 
00:28:06.148 [2024-10-07 09:48:54.848607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.848639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.848780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.848811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.848921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.848955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.849075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.849105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.849231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.849261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 
00:28:06.148 [2024-10-07 09:48:54.849380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.849410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.849510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.849541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.849673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.849704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.849836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.849866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 00:28:06.148 [2024-10-07 09:48:54.849962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.148 [2024-10-07 09:48:54.849993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.148 qpair failed and we were unable to recover it. 
00:28:06.148 [2024-10-07 09:48:54.850112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.850142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.850253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.850287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.850455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.850487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.850595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.850636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.850749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.850781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.850921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.850951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.851072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.851101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.851187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.851218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.851342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.851371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.851493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.851524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.851610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.851640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.851783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.851814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.851940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.851971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.852060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.852090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.852191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.852221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.852321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.852351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.852511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.852541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.148 qpair failed and we were unable to recover it.
00:28:06.148 [2024-10-07 09:48:54.852649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.148 [2024-10-07 09:48:54.852689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.852784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.852814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.852946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.852975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.853107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.853138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.853236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.853265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.853393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.853424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.853584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.853615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.853721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.853752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.853852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.853883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.853984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.854016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.854151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.854182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.854310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.854340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.854440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.854470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.854563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.854603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.854738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.854785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.854897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.854930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.855070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.855104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.855242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.855274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.855379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.855411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.855509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.855541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.855645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.855689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.855825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.855857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.855969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.856002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.856141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.856174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.856316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.856350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.856517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.856568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.856723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.856770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.856888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.856921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.857079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.857110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.857223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.857255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.857398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.857430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.857559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.857591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.857725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.857757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.857895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.857925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.858017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.858048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.858152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.858185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.858336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.858367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.858489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.858520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.149 qpair failed and we were unable to recover it.
00:28:06.149 [2024-10-07 09:48:54.858617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.149 [2024-10-07 09:48:54.858648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.858784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.858815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.858941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.858977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.859087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.859118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.859254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.859285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.859387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.859428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.859567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.859599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.859757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.859791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.859886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.859918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.860082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.860114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.860226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.860259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.860403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.860437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.860556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.860605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.860760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.860794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.860970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.861036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.861170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.861236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.861420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.861486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.861677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.861710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.861847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.861877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.861988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.862021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.862161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.862210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.862362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.862398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.862536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.862571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.862688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.862739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.862894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.862926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.863089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.863121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.863255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.863287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.863424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.863455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.863563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.863596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.863715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.863750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.863853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.863885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.864010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.864042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.864195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.864227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.864341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.864373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.864467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.864498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.864653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.864690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.864780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.864811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.864899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.864930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.865037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.865068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.150 qpair failed and we were unable to recover it.
00:28:06.150 [2024-10-07 09:48:54.865222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.150 [2024-10-07 09:48:54.865253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.151 qpair failed and we were unable to recover it.
00:28:06.151 [2024-10-07 09:48:54.865382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.865413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.865513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.865544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.865659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.865696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.865799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.865830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.865959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.865989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 
00:28:06.151 [2024-10-07 09:48:54.866080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.866110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.866241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.866271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.866399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.866431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.866556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.866587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.866695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.866726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 
00:28:06.151 [2024-10-07 09:48:54.866827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.866858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.866959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.866990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.867091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.867122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.867248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.867279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.867402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.867433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 
00:28:06.151 [2024-10-07 09:48:54.867563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.867595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.867692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.867730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.867831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.867861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.867951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.867982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.868106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.868137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 
00:28:06.151 [2024-10-07 09:48:54.868275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.868305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.868448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.868479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.868583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.868615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.868739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.868787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.868906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.868940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 
00:28:06.151 [2024-10-07 09:48:54.869070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.869102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.869203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.869235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.869367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.869399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.869497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.869529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.869651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.869697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 
00:28:06.151 [2024-10-07 09:48:54.869817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.869867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.869978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.870019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.870132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.870165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.870332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.870366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.870465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.870498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 
00:28:06.151 [2024-10-07 09:48:54.870642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.870685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.870822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.870853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.871017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-10-07 09:48:54.871064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-10-07 09:48:54.871158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.871189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.871299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.871330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-10-07 09:48:54.871454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.871484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.871611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.871641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.871782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.871813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.871930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.871976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.872082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.872115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-10-07 09:48:54.872244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.872276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.872404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.872436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.872536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.872568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.872661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.872717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.872840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.872871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-10-07 09:48:54.872970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.873002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.873145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.873177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.873313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.873348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.873470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.873500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.873599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.873630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-10-07 09:48:54.873770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.873818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.873953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.874000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.874099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.874130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.874246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.874278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.874383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.874414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-10-07 09:48:54.874544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.874575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.874676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.874707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.874811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.874841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.874952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.874982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.875080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.875110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-10-07 09:48:54.875255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.875302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.875445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.875480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.875602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.875648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.875768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.875800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.875945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.875976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-10-07 09:48:54.876119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.876154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-10-07 09:48:54.876291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-10-07 09:48:54.876322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.876459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.876491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.876623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.876655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.876760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.876794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-10-07 09:48:54.876901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.876932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.877127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.877193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.877336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.877409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.877535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.877568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.877710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.877742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-10-07 09:48:54.877842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.877875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.878050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.878083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.878241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.878274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.878437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.878477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.878574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.878608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-10-07 09:48:54.878748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.878795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.878940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.878973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.879093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.879140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.879266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.879297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.879423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.879455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-10-07 09:48:54.879594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.879623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.879762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.879793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.879897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.879928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.880045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.880077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.880178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.880209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-10-07 09:48:54.880341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.880371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.880472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.880502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.880640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.880679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.880781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.880812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.880910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.880940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-10-07 09:48:54.881075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.881105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.881233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.881263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.881352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.881381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.881516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.881552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-10-07 09:48:54.881706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-10-07 09:48:54.881740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-10-07 09:48:54.881850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.881883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.881994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.882026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.882140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.882173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.882308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.882340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.882465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.882498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-10-07 09:48:54.882604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.882641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.882803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.882852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.882977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.883013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.883127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.883160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.883270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.883303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-10-07 09:48:54.883411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.883444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.883550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.883599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.883704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.883737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.883827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.883859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.883960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.884010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-10-07 09:48:54.884175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.884208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.884347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.884381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.884553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.884586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.884724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.884757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.884869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.884904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-10-07 09:48:54.885076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.885111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.885306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.885340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.885477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.885510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.885623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.885657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.885785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.885817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-10-07 09:48:54.885914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.885946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.886151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.886217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.886363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.886414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.886575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.886608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.886768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.886800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-10-07 09:48:54.886939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.886971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.887069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.887118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.887261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-10-07 09:48:54.887295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-10-07 09:48:54.887439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.887472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.887617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.887655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-10-07 09:48:54.887798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.887832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.887968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.888000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.888101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.888149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.888256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.888290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.888439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.888489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-10-07 09:48:54.888621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.888654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.888788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.888819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.888946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.888977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.889102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.889151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.889258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.889293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-10-07 09:48:54.889434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.889474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.889592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.889625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.889817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.889848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.890005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.890037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.890166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.890198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-10-07 09:48:54.890314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.890347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.890469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.890516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.890645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.890697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.890846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.890878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.891003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.891035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-10-07 09:48:54.891137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.891170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.891319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.891352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.891508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.891540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.891643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.891686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.891846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.891878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-10-07 09:48:54.892011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.892044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.892176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.892225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.892340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.892373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.892504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.892537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.892710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.892744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-10-07 09:48:54.892850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.892882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.893026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.893057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.893152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.893199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.893352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.893384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.893544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.893576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-10-07 09:48:54.893700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.893748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-10-07 09:48:54.893898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-10-07 09:48:54.893932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.156 [2024-10-07 09:48:54.894040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.156 [2024-10-07 09:48:54.894073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.156 qpair failed and we were unable to recover it. 00:28:06.156 [2024-10-07 09:48:54.894209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.156 [2024-10-07 09:48:54.894242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.156 qpair failed and we were unable to recover it. 00:28:06.156 [2024-10-07 09:48:54.894390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.156 [2024-10-07 09:48:54.894423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.156 qpair failed and we were unable to recover it. 
00:28:06.156 [2024-10-07 09:48:54.894558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.156 [2024-10-07 09:48:54.894592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.156 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1055 connect() failed, errno = 111 / nvme_tcp.c:2399 sock connection error with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats approximately 114 more times between 09:48:54.894749 and 09:48:54.912940, with only the timestamps and the tqpair pointer (alternating between 0x7fe7a8000b90 and 0x7fe7ac000b90) varying ...]
00:28:06.159 [2024-10-07 09:48:54.913037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.913068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.913208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.913240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.913340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.913376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.913487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.913522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.913633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.913680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 
00:28:06.159 [2024-10-07 09:48:54.913810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.913842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.913966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.913997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.914138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.914171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.914302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.914333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.914433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.914466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 
00:28:06.159 [2024-10-07 09:48:54.914563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.914594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.914703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.914735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.914865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.914897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.915033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.915064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.915168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.915199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 
00:28:06.159 [2024-10-07 09:48:54.915357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.915388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.915498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.915529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.915648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.915688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.915786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.915817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 00:28:06.159 [2024-10-07 09:48:54.915951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.159 [2024-10-07 09:48:54.915982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.159 qpair failed and we were unable to recover it. 
00:28:06.159 [2024-10-07 09:48:54.916111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.916142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.916235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.916268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.916370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.916401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.916534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.916566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.916721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.916754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.160 [2024-10-07 09:48:54.916851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.916882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.916990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.917022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.917126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.917157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.917256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.917288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.917418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.917450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.160 [2024-10-07 09:48:54.917578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.917609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.917792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.917825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.917953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.917985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.918084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.918117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.918235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.918267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.160 [2024-10-07 09:48:54.918370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.918403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.918536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.918568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.918723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.918756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.918862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.918893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.918992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.919024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.160 [2024-10-07 09:48:54.919136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.919167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.919295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.919326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.919477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.919530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.919683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.919720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.919846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.919880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.160 [2024-10-07 09:48:54.920009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.920041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.920175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.920208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.920337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.920369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.920470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.920502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.920638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.920677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.160 [2024-10-07 09:48:54.920782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.920815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.920919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.920951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.921082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.921114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.921271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.921303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.921401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.921433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.160 [2024-10-07 09:48:54.921551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.921584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.921729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.921763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.921862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.921894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-10-07 09:48:54.922055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.160 [2024-10-07 09:48:54.922087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.922185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.922216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-10-07 09:48:54.922341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.922373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.922468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.922500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.922597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.922628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.922803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.922836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.922941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.922973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-10-07 09:48:54.923075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.923107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.923233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.923266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.923371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.923404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.923541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.923573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.923692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.923725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-10-07 09:48:54.923858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.923890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.924016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.924048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.924169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.924202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.924317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.924350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-10-07 09:48:54.924479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.924510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-10-07 09:48:54.924645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.161 [2024-10-07 09:48:54.924685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.164 [... the same three-message sequence (posix.c:1055 connect() failed, errno = 111 → nvme_tcp.c:2399 sock connection error → "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 09:48:54.924 through 09:48:54.942, alternating between tqpair=0x7fe7a8000b90 and tqpair=0x7fe7ac000b90, always with addr=10.0.0.2, port=4420 ...]
00:28:06.164 [2024-10-07 09:48:54.942220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.942252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.942384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.942415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.942540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.942571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.942710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.942742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.942847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.942879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 
00:28:06.164 [2024-10-07 09:48:54.942982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.943015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.943150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.943182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.943292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.943323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.943424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.943456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.943590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.943622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 
00:28:06.164 [2024-10-07 09:48:54.943735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.943767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.943902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.164 [2024-10-07 09:48:54.943949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.164 qpair failed and we were unable to recover it. 00:28:06.164 [2024-10-07 09:48:54.944102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.944143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.944277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.944310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.944417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.944449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 
00:28:06.165 [2024-10-07 09:48:54.944561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.944593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.944698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.944731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.944862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.944894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.945037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.945070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.945173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.945207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 
00:28:06.165 [2024-10-07 09:48:54.945308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.945342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.945474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.945506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.945637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.945676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.945816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.945848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.945950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.945983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 
00:28:06.165 [2024-10-07 09:48:54.946088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.946120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.946217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.946248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.946399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.946431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.946521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.946552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.946646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.946687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 
00:28:06.165 [2024-10-07 09:48:54.946781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.946814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.946905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.946936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.947065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.947096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.947252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.947284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.947384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.947415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 
00:28:06.165 [2024-10-07 09:48:54.947520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.947551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.947694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.947727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.947857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.947889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.948025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.948056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.948172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.948205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 
00:28:06.165 [2024-10-07 09:48:54.948296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.948327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.948436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.948467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.948600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.948632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.948751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.948783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.948881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.948913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 
00:28:06.165 [2024-10-07 09:48:54.949012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.949043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.949171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.949202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.949339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.949370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.949530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.949561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 00:28:06.165 [2024-10-07 09:48:54.949672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.165 [2024-10-07 09:48:54.949704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.165 qpair failed and we were unable to recover it. 
00:28:06.165 [2024-10-07 09:48:54.949806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.949838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.949981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.950013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.950112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.950149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.950255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.950287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.950390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.950421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 
00:28:06.166 [2024-10-07 09:48:54.950548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.950580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.950706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.950738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.950853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.950885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.950983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.951015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.951118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.951151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 
00:28:06.166 [2024-10-07 09:48:54.951250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.951282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.951441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.951472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.951575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.951607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.951718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.951753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.951851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.951878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 
00:28:06.166 [2024-10-07 09:48:54.951987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.952033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.952146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.952174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.952314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.952345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.952460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.952490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.952604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.952643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 
00:28:06.166 [2024-10-07 09:48:54.952779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.952810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.952938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.952968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.953054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.953082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.953173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.953201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 00:28:06.166 [2024-10-07 09:48:54.953322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.166 [2024-10-07 09:48:54.953350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.166 qpair failed and we were unable to recover it. 
00:28:06.166 [2024-10-07 09:48:54.953443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.953471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.953558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.953586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.953685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.953713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.953795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.953823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.953928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.953970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.954079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.954109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.954212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.954242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.954331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.954359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.954477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.954505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.954622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.954650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.954765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.954794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.954896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.954925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.955043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.955071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.955213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.955240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.955352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.955380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.955502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.955533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.166 [2024-10-07 09:48:54.955611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.166 [2024-10-07 09:48:54.955638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.166 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.955765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.955802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.955906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.955933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.956026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.956054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.956175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.956202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.956285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.956312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.956418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.956460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.956578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.956610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.956732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.956761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.956850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.956878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.956979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.957007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.957153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.957181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.957274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.957302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.957392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.957420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.957512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.957541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.957694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.957724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.957841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.957869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.957963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.957990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.958136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.958163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.958310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.958337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.958450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.958477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.958598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.958628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.958743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.958773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.958893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.958923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.959015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.959042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.959127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.959154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.959255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.959284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.959406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.959434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.959532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.959565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.959657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.959694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.959814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.959843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.959962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.959989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.960076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.960103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.960200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.960227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.960317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.960345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.960458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.960485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.960583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.167 [2024-10-07 09:48:54.960613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.167 qpair failed and we were unable to recover it.
00:28:06.167 [2024-10-07 09:48:54.960749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.960778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.960869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.960898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.961012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.961041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.961198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.961226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.961329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.961359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.961484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.961512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.961633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.961661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.961786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.961814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.961933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.961961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.962081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.962108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.962222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.962250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.962329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.962356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.962449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.962500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.962624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.962655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.962792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.962821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.962950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.962987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.963069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.963096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.963216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.963244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.963343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.963372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.963460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.963488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.963582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.963611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.963710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.963739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.963824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.963854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.963945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.963973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.964089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.964117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.964209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.964237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.964354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.964383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.964473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.964501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.964594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.964622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.964709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.964738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.964841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.964881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.965004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.965050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.965145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.965175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.965265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.965294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.965419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.965449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.965613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.965643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.965748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.965779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.965888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.965917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.966068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.966097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.966188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.966217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.966311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.966341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.168 [2024-10-07 09:48:54.966453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.168 [2024-10-07 09:48:54.966482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.168 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.966635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.966663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.966785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.966829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.966959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.966995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.967100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.967129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.967253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.967283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.967405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.967434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.967584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.967613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.967712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.967742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.967860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.967889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.967986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.968014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.968103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.968133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.968227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.968256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.968340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.968370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.968499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.968528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.968615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.968644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.968763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.968792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.968920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.968950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.969072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.969101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.969227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.969256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.969386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.169 [2024-10-07 09:48:54.969415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.169 qpair failed and we were unable to recover it.
00:28:06.169 [2024-10-07 09:48:54.969527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.969556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.969654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.969694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.969829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.969860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.969954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.969985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.970115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.970147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 
00:28:06.169 [2024-10-07 09:48:54.970281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.970314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.970417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.970447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.970540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.970574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.970726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.970774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.970908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.970967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 
00:28:06.169 [2024-10-07 09:48:54.971114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.971143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.971300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.971335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.971455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.971484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.971586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.971616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.971728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.971760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 
00:28:06.169 [2024-10-07 09:48:54.971887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.971917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.972008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.972042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.972167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.972197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.972297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.972327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.972423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.972454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 
00:28:06.169 [2024-10-07 09:48:54.972563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.972593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:06.169 qpair failed and we were unable to recover it. 00:28:06.169 [2024-10-07 09:48:54.972750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.169 [2024-10-07 09:48:54.972793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.972888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.972918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.973037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.973066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.973178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.973206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 
00:28:06.170 [2024-10-07 09:48:54.973291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.973319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.973420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.973447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.973564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.973592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.973713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.973743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.973837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.973865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 
00:28:06.170 [2024-10-07 09:48:54.973959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.973988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.974133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.974160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.974261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.974289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.974385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.974413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.974527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.974556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 
00:28:06.170 [2024-10-07 09:48:54.974654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.974688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.974783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.974819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.974943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.974971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.975061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.975089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.975210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.975238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 
00:28:06.170 [2024-10-07 09:48:54.975371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.975414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.975562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.975594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.975694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.975725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.975813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.975843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.975958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.975989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 
00:28:06.170 [2024-10-07 09:48:54.976083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.976112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.976228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.976257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.976386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.976415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.976505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.976534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.976653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.976690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 
00:28:06.170 [2024-10-07 09:48:54.976800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.976828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.976948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.976976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.977092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.977121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.977221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.977250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.977371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.977400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 
00:28:06.170 [2024-10-07 09:48:54.977493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.977523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.977647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.977684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.977804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.977833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.977923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.977952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.978050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.978079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 
00:28:06.170 [2024-10-07 09:48:54.978199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.978229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.978330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.978358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.978470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.170 [2024-10-07 09:48:54.978499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.170 qpair failed and we were unable to recover it. 00:28:06.170 [2024-10-07 09:48:54.978592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.978626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.978758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.978788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 
00:28:06.171 [2024-10-07 09:48:54.978883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.978911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.979031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.979060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.979150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.979177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.979274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.979302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.979418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.979445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 
00:28:06.171 [2024-10-07 09:48:54.979534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.979562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.979702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.979730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.979849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.979876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.979992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.980021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.980116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.980145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 
00:28:06.171 [2024-10-07 09:48:54.980260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.980288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.980378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.980407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.980538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.980567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.980697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.980725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.980823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.980850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 
00:28:06.171 [2024-10-07 09:48:54.980931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.980959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.981079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.981106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.981199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.981228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.981334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.981362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 00:28:06.171 [2024-10-07 09:48:54.981455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.171 [2024-10-07 09:48:54.981483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.171 qpair failed and we were unable to recover it. 
00:28:06.171 [2024-10-07 09:48:54.981606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.981633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.981760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.981788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.981878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.981905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.982023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.982051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.982148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.982176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.982295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.982327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.982446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.982474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.982565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.982593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.982693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.982723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.982814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.982842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.982937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.982965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.983082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.983110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.983224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.983252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.983350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.983377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.983461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.983490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.983573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.983600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.171 qpair failed and we were unable to recover it.
00:28:06.171 [2024-10-07 09:48:54.983740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.171 [2024-10-07 09:48:54.983770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.983885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.983913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.983995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.984023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.984119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.984147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.984273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.984301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.984393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.984420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.984509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.984537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.984657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.984691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.984811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.984840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.984936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.984964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.985047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.985075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.985163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.985190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.985286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.985315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.985438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.985465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.985579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.985607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.985700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.985728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.985817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.985849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.985941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.985969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.986093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.986122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.986214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.986242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.986364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.986393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.986518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.986546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.986691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.986719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.986832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.986859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.986984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.987012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.987141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.987169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.987289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.987318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.987433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.987461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.987603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.987632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.987756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.987784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.987891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.987934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.988052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.988084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.988209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.988238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.988388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.988417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.988541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.988571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.988696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.988725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.988848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.988877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.988972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.989001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.989086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.989115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.989249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.989279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.989371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.989399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.989496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.989524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.989639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.989671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.989761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.989794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.989915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.989943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.172 qpair failed and we were unable to recover it.
00:28:06.172 [2024-10-07 09:48:54.990032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.172 [2024-10-07 09:48:54.990059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.990148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.990176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.990268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.990296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.990411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.990438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.990530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.990558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.990647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.990689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.990781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.990809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.990899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.990927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.991072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.991101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.991196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.991223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.991341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.991368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.991460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.991488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.991586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.991630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.991809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.991841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.991933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.991963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.992062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.992092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.992187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.992224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.992339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.992367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.992459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.992489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.992615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.992643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.992771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.992799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.992888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.992916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.993009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.993037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.993159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.993187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.993299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.993327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.993425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.993458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.993554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.993581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.993707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.993739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.993834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.993864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.993957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.993985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.994070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.994099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.994219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.994249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.994336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.994364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.994511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.173 [2024-10-07 09:48:54.994539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.173 qpair failed and we were unable to recover it.
00:28:06.173 [2024-10-07 09:48:54.994639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.994675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.994793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.994822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.994905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.994935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.995064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.995093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.995208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.995237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 
00:28:06.173 [2024-10-07 09:48:54.995372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.995401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.995499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.995528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.995648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.995681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.995772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.995800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.995920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.995948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 
00:28:06.173 [2024-10-07 09:48:54.996062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.173 [2024-10-07 09:48:54.996090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.173 qpair failed and we were unable to recover it. 00:28:06.173 [2024-10-07 09:48:54.996203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.996230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.996380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.996407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.996559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.996587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.996687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.996719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:54.996814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.996843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.996939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.996968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.997115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.997144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.997271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.997305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.997400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.997430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:54.997580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.997608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.997738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.997767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.997868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.997897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.998023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.998051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.998199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.998228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:54.998314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.998343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.998467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.998506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.998615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.998663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.998869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.998905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.999011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.999046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:54.999184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.999220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.999355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.999390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.999493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.999529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.999675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.999712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:54.999862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:54.999897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:55.000085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.000122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.000276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.000313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.000459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.000495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.000642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.000686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.000826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.000861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:55.000976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.001013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.001152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.001187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.001362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.001398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.001513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.001550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.001712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.001750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:55.001893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.001935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.002081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.002117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.002267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.002303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.002483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.002520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.002637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.002680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:55.002860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.002897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.003077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.003138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.003317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.003387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.003542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.003617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.003829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.003889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.174 [2024-10-07 09:48:55.004035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.004124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.004357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.004421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.004614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.004685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.004909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.004958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 00:28:06.174 [2024-10-07 09:48:55.005149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.174 [2024-10-07 09:48:55.005197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.174 qpair failed and we were unable to recover it. 
00:28:06.175 [2024-10-07 09:48:55.005366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.005428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.005679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.005741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.005861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.005897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.006082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.006117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.006274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.006309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 
00:28:06.175 [2024-10-07 09:48:55.006479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.006540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.006739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.006776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.006898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.006935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.007088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.007123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.007298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.007333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 
00:28:06.175 [2024-10-07 09:48:55.007530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.007591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.007818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.007874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.008041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.008133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.008334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.008378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.008617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.008695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 
00:28:06.175 [2024-10-07 09:48:55.008883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.008936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.009154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.009217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.009441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.009496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.009698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.009737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.009867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.009905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 
00:28:06.175 [2024-10-07 09:48:55.010025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.010063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.010211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.010248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.010356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.010394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.010578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.010638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 00:28:06.175 [2024-10-07 09:48:55.010800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.175 [2024-10-07 09:48:55.010874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.175 qpair failed and we were unable to recover it. 
00:28:06.175 [2024-10-07 09:48:55.011055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.175 [2024-10-07 09:48:55.011093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.175 qpair failed and we were unable to recover it.
00:28:06.178 [preceding three-line error sequence repeated through 09:48:55.034716, alternating between tqpair=0x7fe7ac000b90 and tqpair=0x7fe7a8000b90, all with addr=10.0.0.2, port=4420, errno = 111]
00:28:06.178 [2024-10-07 09:48:55.034914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.034955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.035082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.035125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.035286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.035329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.035455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.035497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.035661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.035710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 
00:28:06.178 [2024-10-07 09:48:55.035863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.035904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.036077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.036118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.036280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.036323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.036481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.036523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.036718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.036761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 
00:28:06.178 [2024-10-07 09:48:55.036958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.036999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.037166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.037207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.037327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.037368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.037557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.037598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.037771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.037813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 
00:28:06.178 [2024-10-07 09:48:55.037981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.038022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.038188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.038230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.038400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.038440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.038605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.038647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.038858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.038905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 
00:28:06.178 [2024-10-07 09:48:55.039068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.039109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.039275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.039316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.039466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.039507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.039678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.039719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.039882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.039923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 
00:28:06.178 [2024-10-07 09:48:55.040073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.040114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.040276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.040317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.040484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.040525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.040722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.040765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.040926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.040966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 
00:28:06.178 [2024-10-07 09:48:55.041156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.041196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.041348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.041389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.041517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.041560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.041768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.041810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.041946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.041989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 
00:28:06.178 [2024-10-07 09:48:55.042153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.042194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.042400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.042441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.042599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.042641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.042841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.042882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.043046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.043089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 
00:28:06.178 [2024-10-07 09:48:55.043224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.043265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.043440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.043481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.043644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.043694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.043859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.043901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.178 qpair failed and we were unable to recover it. 00:28:06.178 [2024-10-07 09:48:55.044035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.178 [2024-10-07 09:48:55.044077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 
00:28:06.179 [2024-10-07 09:48:55.044231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.044272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.044432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.044473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.044686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.044729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.044892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.044934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.045127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.045167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 
00:28:06.179 [2024-10-07 09:48:55.045335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.045375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.045513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.045555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.045721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.045763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.045948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.045989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.046120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.046161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 
00:28:06.179 [2024-10-07 09:48:55.046321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.046363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.046567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.046610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.046790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.046833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.046995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.047038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.047207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.047257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 
00:28:06.179 [2024-10-07 09:48:55.047432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.047475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.047625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.047676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.047855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.047897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.048034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.048076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.048240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.048282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 
00:28:06.179 [2024-10-07 09:48:55.048452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.048493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.048676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.048722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.048897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.048940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.049106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.049151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.049322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.049366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 
00:28:06.179 [2024-10-07 09:48:55.049570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.049613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.049823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.049867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.050025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.050066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.050237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.050279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 00:28:06.179 [2024-10-07 09:48:55.050403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.179 [2024-10-07 09:48:55.050444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.179 qpair failed and we were unable to recover it. 
00:28:06.179 [2024-10-07 09:48:55.050574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.179 [2024-10-07 09:48:55.050614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:06.179 qpair failed and we were unable to recover it.
00:28:06.182 [2024-10-07 09:48:55.075989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.076037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.076216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.076263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.076412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.076459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.076697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.076744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.076928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.076974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 
00:28:06.182 [2024-10-07 09:48:55.077158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.077205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.077416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.077461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.077649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.077705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.077923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.077968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.078151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.078197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 
00:28:06.182 [2024-10-07 09:48:55.078341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.078386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.078551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.078597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.078823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.078869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.079084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.079130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.079312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.079358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 
00:28:06.182 [2024-10-07 09:48:55.079533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.079578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.079780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.079827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.080022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.080068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.080218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.080263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.080446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.080492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 
00:28:06.182 [2024-10-07 09:48:55.080714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.080762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.080976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.081022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.081169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.081217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.081429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.081475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.081657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.081712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 
00:28:06.182 [2024-10-07 09:48:55.081904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.081950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.082166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.082212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.082426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.082472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.082658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.082712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.182 [2024-10-07 09:48:55.082888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.082943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 
00:28:06.182 [2024-10-07 09:48:55.083156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.182 [2024-10-07 09:48:55.083202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.182 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.083383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.083429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.083568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.083615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.083793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.083840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.084047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.084093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 
00:28:06.183 [2024-10-07 09:48:55.084261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.084306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.084448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.084495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.084647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.084709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.084934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.084980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.085112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.085160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 
00:28:06.183 [2024-10-07 09:48:55.085374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.085421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.085636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.085706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.085856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.085901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.086091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.086136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.086344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.086393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 
00:28:06.183 [2024-10-07 09:48:55.086631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.086694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.086923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.086971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.087162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.087211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.087438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.087486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.087714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.087764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 
00:28:06.183 [2024-10-07 09:48:55.087996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.088046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.088272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.088320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.088488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.088535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.088721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.088769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.088984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.089030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 
00:28:06.183 [2024-10-07 09:48:55.089240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.089286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.089467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.089513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.089738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.089786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.089957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.090003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.090184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.090230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 
00:28:06.183 [2024-10-07 09:48:55.090444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.090489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.090690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.090740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.090928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.090978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.091180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.091228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.091407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.091456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 
00:28:06.183 [2024-10-07 09:48:55.091638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.091702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.091878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.091927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.092112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.092161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.092388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.092435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 00:28:06.183 [2024-10-07 09:48:55.092661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.092732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.183 qpair failed and we were unable to recover it. 
00:28:06.183 [2024-10-07 09:48:55.092877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.183 [2024-10-07 09:48:55.092926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 00:28:06.184 [2024-10-07 09:48:55.093152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.093200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 00:28:06.184 [2024-10-07 09:48:55.093388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.093437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 00:28:06.184 [2024-10-07 09:48:55.093589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.093640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 00:28:06.184 [2024-10-07 09:48:55.093908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.093957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 
00:28:06.184 [2024-10-07 09:48:55.094145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.094195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 00:28:06.184 [2024-10-07 09:48:55.094339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.094390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 00:28:06.184 [2024-10-07 09:48:55.094578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.094627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 00:28:06.184 [2024-10-07 09:48:55.094902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.094967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 00:28:06.184 [2024-10-07 09:48:55.095211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.184 [2024-10-07 09:48:55.095271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.184 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." messages repeat through 09:48:55.120935, alternating between tqpair=0x7fe7ac000b90 and tqpair=0x7fe7b4000b90 ...]
00:28:06.465 [2024-10-07 09:48:55.121126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.121176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.121336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.121385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.121602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.121650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.121838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.121888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.122069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.122119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 
00:28:06.465 [2024-10-07 09:48:55.122294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.122342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.122569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.122618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.122823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.122873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.123019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.123068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.123253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.123304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 
00:28:06.465 [2024-10-07 09:48:55.123461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.123509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.123661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.123724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.123918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.123968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.124201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.124250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.124411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.124459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 
00:28:06.465 [2024-10-07 09:48:55.124615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.124663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.124882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.465 [2024-10-07 09:48:55.124931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.465 qpair failed and we were unable to recover it. 00:28:06.465 [2024-10-07 09:48:55.125151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.125200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.125352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.125400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.125591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.125639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 
00:28:06.466 [2024-10-07 09:48:55.125814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.125864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.126013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.126061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.126243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.126293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.126498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.126548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.126739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.126789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 
00:28:06.466 [2024-10-07 09:48:55.126931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.126980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.127205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.127278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.127460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.127512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.127683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.127734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.127897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.127947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 
00:28:06.466 [2024-10-07 09:48:55.128183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.128232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.128389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.128438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.128604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.128652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.128884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.128933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.129236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.129300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 
00:28:06.466 [2024-10-07 09:48:55.129494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.129558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.129821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.129887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.130111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.130176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.130435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.130498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.130757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.130824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 
00:28:06.466 [2024-10-07 09:48:55.131135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.131201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.131444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.131509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.131721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.131786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.132037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.132101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.132366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.132432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 
00:28:06.466 [2024-10-07 09:48:55.132696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.132763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.132967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.133031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.133285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.133349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.133554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.133619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.133893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.133959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 
00:28:06.466 [2024-10-07 09:48:55.134212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.134276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.134537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.134602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.134868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.134934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.135204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.135279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 00:28:06.466 [2024-10-07 09:48:55.135526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.135590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.466 qpair failed and we were unable to recover it. 
00:28:06.466 [2024-10-07 09:48:55.135800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.466 [2024-10-07 09:48:55.135866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.136074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.136137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.136342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.136406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.136623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.136711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.136971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.137036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 
00:28:06.467 [2024-10-07 09:48:55.137277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.137342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.137540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.137604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.137845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.137910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.138123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.138190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.138485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.138549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 
00:28:06.467 [2024-10-07 09:48:55.138785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.138851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.139045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.139109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.139368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.139432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.139638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.139722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.139963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.140027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 
00:28:06.467 [2024-10-07 09:48:55.140225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.140290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.140515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.140578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.140820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.140887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.141139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.141204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 00:28:06.467 [2024-10-07 09:48:55.141499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.467 [2024-10-07 09:48:55.141563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.467 qpair failed and we were unable to recover it. 
00:28:06.467 [2024-10-07 09:48:55.141785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.467 [2024-10-07 09:48:55.141851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.467 qpair failed and we were unable to recover it.
00:28:06.467 [... the same three-line error (connect() refused with errno = 111, tqpair=0x1fab230, addr=10.0.0.2, port=4420, qpair unrecoverable) repeats for every subsequent retry through timestamp 09:48:55.176, differing only in timestamps ...]
00:28:06.470 [2024-10-07 09:48:55.176661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.176741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.177028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.177093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.177306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.177369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.177571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.177639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.177864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.177931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 
00:28:06.470 [2024-10-07 09:48:55.178184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.178248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.178498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.178563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.178772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.178840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.179096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.179161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.179370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.179436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 
00:28:06.470 [2024-10-07 09:48:55.179693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.179759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.470 [2024-10-07 09:48:55.180012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.470 [2024-10-07 09:48:55.180077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.470 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.180363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.180438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.180730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.180795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.181040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.181105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 
00:28:06.471 [2024-10-07 09:48:55.181350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.181415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.181680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.181747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.181999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.182064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.182294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.182359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.182572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.182636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 
00:28:06.471 [2024-10-07 09:48:55.182884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.182948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.183147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.183220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.183461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.183524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.183774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.183840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.184057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.184122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 
00:28:06.471 [2024-10-07 09:48:55.184370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.184434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.184700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.184766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.184982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.185046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.185331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.185395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.185645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.185723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 
00:28:06.471 [2024-10-07 09:48:55.185953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.186018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.186236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.186301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.186554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.186618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.186869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.186933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.187180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.187245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 
00:28:06.471 [2024-10-07 09:48:55.187504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.187569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.187842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.187908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.188149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.188214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.188510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.188575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.188821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.188886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 
00:28:06.471 [2024-10-07 09:48:55.189149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.189213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.189469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.189536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.189798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.189863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.190047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.190114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.190365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.190431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 
00:28:06.471 [2024-10-07 09:48:55.190703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.190768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.191057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.191122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.191367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.191432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.191663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.191748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 00:28:06.471 [2024-10-07 09:48:55.192003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.471 [2024-10-07 09:48:55.192068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.471 qpair failed and we were unable to recover it. 
00:28:06.471 [2024-10-07 09:48:55.192291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.192356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.192615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.192696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.192941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.193006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.193309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.193383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.193621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.193704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 
00:28:06.472 [2024-10-07 09:48:55.193967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.194031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.194280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.194343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.194566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.194630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.194842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.194908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.195198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.195262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 
00:28:06.472 [2024-10-07 09:48:55.195554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.195618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.195829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.195896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.196127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.196191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.196488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.196553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.196872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.196939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 
00:28:06.472 [2024-10-07 09:48:55.197152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.197216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.197459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.197523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.197736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.197804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.198048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.198113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.198354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.198418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 
00:28:06.472 [2024-10-07 09:48:55.198721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.198795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.199026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.199092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.199331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.199395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.199641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.199723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.199990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.200056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 
00:28:06.472 [2024-10-07 09:48:55.200333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.200396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.200688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.200753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.201049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.201114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.201360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.201425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 00:28:06.472 [2024-10-07 09:48:55.201613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.472 [2024-10-07 09:48:55.201691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.472 qpair failed and we were unable to recover it. 
00:28:06.475 [2024-10-07 09:48:55.236717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.236783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.475 [2024-10-07 09:48:55.237074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.237139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.475 [2024-10-07 09:48:55.237332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.237395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.475 [2024-10-07 09:48:55.237596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.237660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.475 [2024-10-07 09:48:55.237971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.238036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 
00:28:06.475 [2024-10-07 09:48:55.238315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.238379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.475 [2024-10-07 09:48:55.238628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.238730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.475 [2024-10-07 09:48:55.238951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.239016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.475 [2024-10-07 09:48:55.239267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.239331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.475 [2024-10-07 09:48:55.239629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.239724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 
00:28:06.475 [2024-10-07 09:48:55.239979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.475 [2024-10-07 09:48:55.240043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.475 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.240320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.240384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.240694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.240778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.241036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.241100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.241387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.241451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 
00:28:06.476 [2024-10-07 09:48:55.241701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.241768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.241993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.242056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.242349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.242413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.242657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.242735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.242989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.243054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 
00:28:06.476 [2024-10-07 09:48:55.243296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.243360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.243655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.243749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.243984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.244048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.244332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.244396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.244687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.244751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 
00:28:06.476 [2024-10-07 09:48:55.245003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.245069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.245321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.245384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.245634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.245711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.245948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.246013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.246310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.246374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 
00:28:06.476 [2024-10-07 09:48:55.246616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.246692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.246923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.246987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.247223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.247290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.247575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.247639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.247891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.247955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 
00:28:06.476 [2024-10-07 09:48:55.248241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.248306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.248545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.248608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.248866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.248931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.249210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.249275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.249486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.249563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 
00:28:06.476 [2024-10-07 09:48:55.249872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.249938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.250178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.250244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.250536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.250600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.250820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.250886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 00:28:06.476 [2024-10-07 09:48:55.251177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.251242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.476 qpair failed and we were unable to recover it. 
00:28:06.476 [2024-10-07 09:48:55.251526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.476 [2024-10-07 09:48:55.251591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.251856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.251922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.252213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.252277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.252552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.252617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.252942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.253012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 
00:28:06.477 [2024-10-07 09:48:55.253304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.253367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.253654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.253736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.253932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.253996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.254286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.254351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.254606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.254688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 
00:28:06.477 [2024-10-07 09:48:55.254981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.255046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.255295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.255360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.255662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.255755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.256000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.256064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.256353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.256417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 
00:28:06.477 [2024-10-07 09:48:55.256629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.256713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.256975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.257039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.257285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.257350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.257606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.257687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.257904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.257969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 
00:28:06.477 [2024-10-07 09:48:55.258237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.258301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.258600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.258681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.258971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.259037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.259292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.259357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.259628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.259721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 
00:28:06.477 [2024-10-07 09:48:55.259941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.260007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.260294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.260359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.260636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.260730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.260966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.261030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.261274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.261338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 
00:28:06.477 [2024-10-07 09:48:55.261582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.261649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.261969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.262035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.262337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.262402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.262648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.262748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 00:28:06.477 [2024-10-07 09:48:55.263007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.477 [2024-10-07 09:48:55.263073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.477 qpair failed and we were unable to recover it. 
00:28:06.480 [... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats with advancing timestamps through 2024-10-07 09:48:55.299580 ...]
00:28:06.480 [2024-10-07 09:48:55.299831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.299899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 00:28:06.480 [2024-10-07 09:48:55.300177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.300243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 00:28:06.480 [2024-10-07 09:48:55.300503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.300569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 00:28:06.480 [2024-10-07 09:48:55.300874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.300940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 00:28:06.480 [2024-10-07 09:48:55.301186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.301253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 
00:28:06.480 [2024-10-07 09:48:55.301536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.301601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 00:28:06.480 [2024-10-07 09:48:55.301914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.301981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 00:28:06.480 [2024-10-07 09:48:55.302228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.302294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 00:28:06.480 [2024-10-07 09:48:55.302545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.302610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 00:28:06.480 [2024-10-07 09:48:55.302893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.480 [2024-10-07 09:48:55.302961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.480 qpair failed and we were unable to recover it. 
00:28:06.481 [2024-10-07 09:48:55.303206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.303272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.303553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.303618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.303883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.303959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.304258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.304324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.304567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.304633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 
00:28:06.481 [2024-10-07 09:48:55.304941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.305007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.305244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.305311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.305560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.305625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.305934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.306000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.306248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.306314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 
00:28:06.481 [2024-10-07 09:48:55.306563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.306628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.306916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.306982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.307220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.307285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.307573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.307638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.307959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.308025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 
00:28:06.481 [2024-10-07 09:48:55.308272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.308337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.308642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.308728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.308942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.309009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.309301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.309365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.309650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.309734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 
00:28:06.481 [2024-10-07 09:48:55.310033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.310099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.310342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.310409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.310727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.310795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.311040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.311106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.311387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.311452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 
00:28:06.481 [2024-10-07 09:48:55.311738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.311806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.312084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.312150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.312407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.312471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.312752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.312820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.313079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.313155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 
00:28:06.481 [2024-10-07 09:48:55.313405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.313470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.313684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.313752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.313974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-10-07 09:48:55.314039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.481 qpair failed and we were unable to recover it. 00:28:06.481 [2024-10-07 09:48:55.314246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.314310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.314596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.314664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 
00:28:06.482 [2024-10-07 09:48:55.314957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.315023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.315231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.315296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.315552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.315618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.315921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.315987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.316178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.316243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 
00:28:06.482 [2024-10-07 09:48:55.316544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.316609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.316906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.316972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.317212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.317279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.317592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.317657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.317963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.318028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 
00:28:06.482 [2024-10-07 09:48:55.318277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.318343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.318558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.318625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.318876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.318942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.319225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.319290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.319576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.319641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 
00:28:06.482 [2024-10-07 09:48:55.319905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.319971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.320252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.320317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.320597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.320662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.320933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.320999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.321235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.321300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 
00:28:06.482 [2024-10-07 09:48:55.321584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.321649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.321970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.322045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.322341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.322405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.322702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.322769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.323063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.323129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 
00:28:06.482 [2024-10-07 09:48:55.323411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.323475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.323782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.323849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.324117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.324182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.324474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.324539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.324820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.324887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 
00:28:06.482 [2024-10-07 09:48:55.325178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.325244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.325457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.325522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.325767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.325834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.326056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.326123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 00:28:06.482 [2024-10-07 09:48:55.326372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-10-07 09:48:55.326436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.482 qpair failed and we were unable to recover it. 
00:28:06.485 [2024-10-07 09:48:55.362522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.485 [2024-10-07 09:48:55.362589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.485 qpair failed and we were unable to recover it. 00:28:06.485 [2024-10-07 09:48:55.362810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.485 [2024-10-07 09:48:55.362877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.485 qpair failed and we were unable to recover it. 00:28:06.485 [2024-10-07 09:48:55.363118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.485 [2024-10-07 09:48:55.363185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.485 qpair failed and we were unable to recover it. 00:28:06.485 [2024-10-07 09:48:55.363435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.485 [2024-10-07 09:48:55.363501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.485 qpair failed and we were unable to recover it. 00:28:06.485 [2024-10-07 09:48:55.363719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.363787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 
00:28:06.486 [2024-10-07 09:48:55.363991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.364059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.364308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.364374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.364565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.364631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.364896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.364961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.365209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.365275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 
00:28:06.486 [2024-10-07 09:48:55.365495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.365562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.365825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.365892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.366211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.366277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.366533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.366599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.366860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.366926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 
00:28:06.486 [2024-10-07 09:48:55.367227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.367292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.367526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.367593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.367820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.367887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.368127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.368193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.368413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.368479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 
00:28:06.486 [2024-10-07 09:48:55.368732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.368798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.368991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.369056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.369282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.369348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.369604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.369681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.369942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.370007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 
00:28:06.486 [2024-10-07 09:48:55.370257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.370323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.370571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.370637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.370934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.371000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.371260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.371324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.371574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.371639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 
00:28:06.486 [2024-10-07 09:48:55.371903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.371970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.372175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.372240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.372431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.372495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.372786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.372855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.373139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.373204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 
00:28:06.486 [2024-10-07 09:48:55.373451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.373516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.373769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.373836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.374135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.374199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.374450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.374515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.374756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.374834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 
00:28:06.486 [2024-10-07 09:48:55.375127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.375192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.375442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.375508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.486 [2024-10-07 09:48:55.375767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.486 [2024-10-07 09:48:55.375834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.486 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.376033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.376098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.376345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.376410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 
00:28:06.487 [2024-10-07 09:48:55.376652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.376733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.376981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.377046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.377301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.377367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.377585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.377650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.377916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.377982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 
00:28:06.487 [2024-10-07 09:48:55.378270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.378334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.378524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.378588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.378853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.378920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.379218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.379283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.379528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.379595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 
00:28:06.487 [2024-10-07 09:48:55.379892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.379960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.380172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.380236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.380448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.380513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.380702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.380770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.381055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.381120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 
00:28:06.487 [2024-10-07 09:48:55.381372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.381437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.381644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.381726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.381978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.382043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.382234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.382299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.382547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.382612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 
00:28:06.487 [2024-10-07 09:48:55.382909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.382976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.383227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.383302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.383557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.383622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.383938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.384003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.384236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.384301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 
00:28:06.487 [2024-10-07 09:48:55.384590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.384654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.384927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.384993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.385283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.385348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.385571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.385635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.385870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.385936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 
00:28:06.487 [2024-10-07 09:48:55.386190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.386255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.386537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.386602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.386838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.386903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.387118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.387182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 00:28:06.487 [2024-10-07 09:48:55.387458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.487 [2024-10-07 09:48:55.387522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.487 qpair failed and we were unable to recover it. 
00:28:06.490 [2024-10-07 09:48:55.421886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.490 [2024-10-07 09:48:55.421953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.490 qpair failed and we were unable to recover it. 00:28:06.490 [2024-10-07 09:48:55.422212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.490 [2024-10-07 09:48:55.422277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.490 qpair failed and we were unable to recover it. 00:28:06.490 [2024-10-07 09:48:55.422518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.490 [2024-10-07 09:48:55.422583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.490 qpair failed and we were unable to recover it. 00:28:06.490 [2024-10-07 09:48:55.422892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.422959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.423208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.423273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 
00:28:06.491 [2024-10-07 09:48:55.423522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.423587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.423806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.423873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.424069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.424134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.424416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.424481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.424736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.424804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 
00:28:06.491 [2024-10-07 09:48:55.425054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.425119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.425333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.425398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.425654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.425733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.425977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.426042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.426311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.426376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 
00:28:06.491 [2024-10-07 09:48:55.426575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.426640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.426854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.426920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.427203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.427269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.427558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.427622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.427847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.427913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 
00:28:06.491 [2024-10-07 09:48:55.428147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.428212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.428461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.428526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.428768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.428835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.429091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.429157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.429381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.429446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 
00:28:06.491 [2024-10-07 09:48:55.429743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.429811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.430069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.430136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.430424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.430488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.430769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.430836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.431059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.431124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 
00:28:06.491 [2024-10-07 09:48:55.431319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.431383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.431611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.431692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.431950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.432016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.432250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.432315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.432509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.432574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 
00:28:06.491 [2024-10-07 09:48:55.432878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.432944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.433183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.433248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.433522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.433587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.433832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.433901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.434148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.434216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 
00:28:06.491 [2024-10-07 09:48:55.434486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.491 [2024-10-07 09:48:55.434553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.491 qpair failed and we were unable to recover it. 00:28:06.491 [2024-10-07 09:48:55.434815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.434882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.435175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.435240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.435480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.435546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.435795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.435861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 
00:28:06.492 [2024-10-07 09:48:55.436098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.436163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.436403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.436468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.436717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.436784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.437044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.437110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.437401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.437466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 
00:28:06.492 [2024-10-07 09:48:55.437721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.437787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.438034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.438099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.438344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.438411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.438704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.438770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.438996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.439062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 
00:28:06.492 [2024-10-07 09:48:55.439272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.439337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.439531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.439595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.439829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.439895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.440193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.440259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.492 [2024-10-07 09:48:55.440505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.440570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 
00:28:06.492 [2024-10-07 09:48:55.440871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.492 [2024-10-07 09:48:55.440938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.492 qpair failed and we were unable to recover it. 00:28:06.765 [2024-10-07 09:48:55.441181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.441249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.441528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.441594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.441877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.441944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.442184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.442250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 
00:28:06.766 [2024-10-07 09:48:55.442534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.442600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.442866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.442944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.443185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.443251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.443446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.443512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.443714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.443782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 
00:28:06.766 [2024-10-07 09:48:55.443972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.444038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.444230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.444295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.444470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.444536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.444806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.444874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 00:28:06.766 [2024-10-07 09:48:55.445116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.766 [2024-10-07 09:48:55.445181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.766 qpair failed and we were unable to recover it. 
00:28:06.766 [2024-10-07 09:48:55.445412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.766 [2024-10-07 09:48:55.445477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.766 qpair failed and we were unable to recover it.
00:28:06.769 [2024-10-07 09:48:55.482702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.482776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.483072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.483137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.483430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.483496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.483776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.483842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.484090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.484157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 
00:28:06.769 [2024-10-07 09:48:55.484447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.484511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.484776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.484842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.485098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.485164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.485405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.485470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.485760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.485826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 
00:28:06.769 [2024-10-07 09:48:55.486040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.486107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.486348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.486414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.769 [2024-10-07 09:48:55.486683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.769 [2024-10-07 09:48:55.486749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.769 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.486950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.487016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.487264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.487328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 
00:28:06.770 [2024-10-07 09:48:55.487579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.487644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.487923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.487990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.488236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.488301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.488552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.488617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.488893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.488969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 
00:28:06.770 [2024-10-07 09:48:55.489261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.489326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.489575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.489641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.489958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.490023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.490317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.490382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.490628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.490712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 
00:28:06.770 [2024-10-07 09:48:55.490981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.491056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.491297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.491369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.491634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.491723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.491980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.492045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.492257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.492322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 
00:28:06.770 [2024-10-07 09:48:55.492613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.492717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.492993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.493058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.493348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.493414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.493704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.493778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.494034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.494099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 
00:28:06.770 [2024-10-07 09:48:55.494350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.494414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.494625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.494704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.494981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.495046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.495331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.495395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.495702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.495775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 
00:28:06.770 [2024-10-07 09:48:55.495971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.496036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.496278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.496345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.496604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.770 [2024-10-07 09:48:55.496688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.770 qpair failed and we were unable to recover it. 00:28:06.770 [2024-10-07 09:48:55.496916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.496991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.497280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.497345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 
00:28:06.771 [2024-10-07 09:48:55.497629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.497709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.498011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.498078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.498288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.498349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.498629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.498709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.498922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.498996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 
00:28:06.771 [2024-10-07 09:48:55.499253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.499317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.499534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.499599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.499879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.499946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.500211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.500277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.500550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.500615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 
00:28:06.771 [2024-10-07 09:48:55.500918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.500985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.501281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.501347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.501597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.501664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.501975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.502041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.502235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.502301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 
00:28:06.771 [2024-10-07 09:48:55.502543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.502609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.502883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.502959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.503222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.503287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.503534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.503599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.503914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.503980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 
00:28:06.771 [2024-10-07 09:48:55.504229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.504295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.504546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.504611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.504873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.504951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.505192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.505259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.505541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.505605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 
00:28:06.771 [2024-10-07 09:48:55.505914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.505987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.506284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.506350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.506601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.506685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.506981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.507045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 00:28:06.771 [2024-10-07 09:48:55.507345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.771 [2024-10-07 09:48:55.507410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.771 qpair failed and we were unable to recover it. 
00:28:06.771 [2024-10-07 09:48:55.507725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.771 [2024-10-07 09:48:55.507792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.771 qpair failed and we were unable to recover it.
00:28:06.771-00:28:06.775 [the same three-record sequence -- connect() failed (errno = 111), sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats with advancing timestamps through 2024-10-07 09:48:55.545739]
00:28:06.775 [2024-10-07 09:48:55.545963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.546028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.546269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.546336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.546550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.546616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.546830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.546895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.547135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.547201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 
00:28:06.775 [2024-10-07 09:48:55.547463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.547529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.547771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.547837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.548131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.548195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.548487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.548553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.548764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.548831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 
00:28:06.775 [2024-10-07 09:48:55.549118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.549182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.549439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.549504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.549808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.549875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.550159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.550224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.550499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.550565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 
00:28:06.775 [2024-10-07 09:48:55.550833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.550900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.551196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.551260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.551467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.551532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.551835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.551902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.552189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.552254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 
00:28:06.775 [2024-10-07 09:48:55.552494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.552559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.552828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.552895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.553190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.553255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.553558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.553623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 00:28:06.775 [2024-10-07 09:48:55.553925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.775 [2024-10-07 09:48:55.553991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.775 qpair failed and we were unable to recover it. 
00:28:06.775 [2024-10-07 09:48:55.554280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.554354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.554641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.554723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.554935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.555001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.555246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.555311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.555552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.555617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 
00:28:06.776 [2024-10-07 09:48:55.555864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.555930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.556219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.556284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.556536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.556601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.556900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.556967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.557264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.557328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 
00:28:06.776 [2024-10-07 09:48:55.557576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.557642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.557952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.558018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.558311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.558375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.558614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.558698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.559004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.559069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 
00:28:06.776 [2024-10-07 09:48:55.559289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.559353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.559639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.559724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.560016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.560081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.560372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.560437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.560722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.560790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 
00:28:06.776 [2024-10-07 09:48:55.561067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.561131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.561373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.561439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.561700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.561768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.562060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.562125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.562321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.562386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 
00:28:06.776 [2024-10-07 09:48:55.562593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.562659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.562970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.563035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.563280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.563355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.563642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.563723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.563945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.564010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 
00:28:06.776 [2024-10-07 09:48:55.564263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.564328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.564578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.564643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.564976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.565041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.565327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.565392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.776 [2024-10-07 09:48:55.565696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.565764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 
00:28:06.776 [2024-10-07 09:48:55.566049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.776 [2024-10-07 09:48:55.566113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.776 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.566412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.566477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.566724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.566792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.567062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.567126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.567407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.567472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 
00:28:06.777 [2024-10-07 09:48:55.567758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.567825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.568120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.568184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.568475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.568541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.568827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.568895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.569179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.569244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 
00:28:06.777 [2024-10-07 09:48:55.569510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.569575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.569899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.569966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.570249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.570314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.570586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.570651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-07 09:48:55.570898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.777 [2024-10-07 09:48:55.570964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.777 qpair failed and we were unable to recover it. 
00:28:06.777 [2024-10-07 09:48:55.571203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.777 [2024-10-07 09:48:55.571268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.777 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure record repeats continuously from 09:48:55.571562 through 09:48:55.607820, always errno = 111, tqpair=0x1fab230, addr=10.0.0.2, port=4420 ...]
00:28:06.780 [2024-10-07 09:48:55.608008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.780 [2024-10-07 09:48:55.608073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.780 qpair failed and we were unable to recover it.
00:28:06.780 [2024-10-07 09:48:55.608313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.780 [2024-10-07 09:48:55.608377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.780 qpair failed and we were unable to recover it. 00:28:06.780 [2024-10-07 09:48:55.608693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.780 [2024-10-07 09:48:55.608761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.780 qpair failed and we were unable to recover it. 00:28:06.780 [2024-10-07 09:48:55.609007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.780 [2024-10-07 09:48:55.609073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.780 qpair failed and we were unable to recover it. 00:28:06.780 [2024-10-07 09:48:55.609324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.780 [2024-10-07 09:48:55.609389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.780 qpair failed and we were unable to recover it. 00:28:06.780 [2024-10-07 09:48:55.609650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.780 [2024-10-07 09:48:55.609736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.780 qpair failed and we were unable to recover it. 
00:28:06.780 [2024-10-07 09:48:55.609946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.780 [2024-10-07 09:48:55.610011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.780 qpair failed and we were unable to recover it. 00:28:06.780 [2024-10-07 09:48:55.610261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.780 [2024-10-07 09:48:55.610326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.780 qpair failed and we were unable to recover it. 00:28:06.780 [2024-10-07 09:48:55.610588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.780 [2024-10-07 09:48:55.610653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.780 qpair failed and we were unable to recover it. 00:28:06.780 [2024-10-07 09:48:55.610881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.610946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.611244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.611310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 
00:28:06.781 [2024-10-07 09:48:55.611523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.611589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.611818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.611885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.612180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.612246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.612435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.612501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.612747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.612814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 
00:28:06.781 [2024-10-07 09:48:55.613073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.613138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.613384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.613450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.613677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.613743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.613997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.614062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.614346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.614412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 
00:28:06.781 [2024-10-07 09:48:55.614607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.614686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.614884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.614950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.615189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.615257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.615509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.615575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.615884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.615951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 
00:28:06.781 [2024-10-07 09:48:55.616208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.616283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.616516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.616581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.616863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.616932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.617165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.617231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.617480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.617547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 
00:28:06.781 [2024-10-07 09:48:55.617809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.617877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.618124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.618188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.618400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.618466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.618751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.618818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.619025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.619091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 
00:28:06.781 [2024-10-07 09:48:55.619340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.619405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.619604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.619695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.619944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.620010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.620269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.620335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.620546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.620611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 
00:28:06.781 [2024-10-07 09:48:55.620842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.620910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.621122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.621189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.621441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.621506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.621755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.621823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.622043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.622110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 
00:28:06.781 [2024-10-07 09:48:55.622372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.622437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.781 [2024-10-07 09:48:55.622688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.781 [2024-10-07 09:48:55.622755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.781 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.622948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.623013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.623233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.623298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.623542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.623607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 
00:28:06.782 [2024-10-07 09:48:55.623852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.623919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.624104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.624168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.624402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.624478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.624721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.624788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.625031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.625097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 
00:28:06.782 [2024-10-07 09:48:55.625346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.625412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.625606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.625685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.625948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.626015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.626245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.626311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.626599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.626664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 
00:28:06.782 [2024-10-07 09:48:55.626924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.626989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.627177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.627242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.627496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.627561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.627817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.627885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.628181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.628246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 
00:28:06.782 [2024-10-07 09:48:55.628488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.628554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.628837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.628904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.629162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.629228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.629508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.629573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.629849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.629915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 
00:28:06.782 [2024-10-07 09:48:55.630158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.630223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.630470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.630535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.630743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.630812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.631043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.631108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.631398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.631464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 
00:28:06.782 [2024-10-07 09:48:55.631699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.631766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.631973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.632038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.632285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.632350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.632542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.632608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 00:28:06.782 [2024-10-07 09:48:55.632888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.782 [2024-10-07 09:48:55.632965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.782 qpair failed and we were unable to recover it. 
00:28:06.785 [2024-10-07 09:48:55.667541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.785 [2024-10-07 09:48:55.667606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.785 qpair failed and we were unable to recover it. 00:28:06.785 [2024-10-07 09:48:55.667854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.785 [2024-10-07 09:48:55.667921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.785 qpair failed and we were unable to recover it. 00:28:06.785 [2024-10-07 09:48:55.668159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.785 [2024-10-07 09:48:55.668227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.785 qpair failed and we were unable to recover it. 00:28:06.785 [2024-10-07 09:48:55.668515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.785 [2024-10-07 09:48:55.668580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.785 qpair failed and we were unable to recover it. 00:28:06.785 [2024-10-07 09:48:55.668833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.785 [2024-10-07 09:48:55.668900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.785 qpair failed and we were unable to recover it. 
00:28:06.785 [2024-10-07 09:48:55.669135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.785 [2024-10-07 09:48:55.669201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.785 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.669455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.669522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.669767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.669834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.670064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.670130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.670377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.670443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 
00:28:06.786 [2024-10-07 09:48:55.670697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.670764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.671019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.671094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.671332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.671399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.671634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.671733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.672004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.672070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 
00:28:06.786 [2024-10-07 09:48:55.672321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.672386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.672642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.672725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.672943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.673010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.673289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.673355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.673637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.673716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 
00:28:06.786 [2024-10-07 09:48:55.674022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.674088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.674377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.674443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.674647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.674726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.674929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.674995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.675218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.675284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 
00:28:06.786 [2024-10-07 09:48:55.675541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.675606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.675891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.675957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.676188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.676253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.676498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.676563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.676798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.676864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 
00:28:06.786 [2024-10-07 09:48:55.677136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.677202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.677487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.677551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.677804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.677871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.678164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.678230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.678433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.678498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 
00:28:06.786 [2024-10-07 09:48:55.678750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.678817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.679069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.679134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.679377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.679443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.679633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.679728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.680027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.680092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 
00:28:06.786 [2024-10-07 09:48:55.680331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.680398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.680643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.680728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.680978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.681043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.681254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.681319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.786 qpair failed and we were unable to recover it. 00:28:06.786 [2024-10-07 09:48:55.681556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.786 [2024-10-07 09:48:55.681623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 
00:28:06.787 [2024-10-07 09:48:55.681892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.681958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.682186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.682251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.682528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.682594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.682855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.682923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.683132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.683198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 
00:28:06.787 [2024-10-07 09:48:55.683448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.683513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.683750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.683818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.684061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.684136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.684360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.684425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.684719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.684787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 
00:28:06.787 [2024-10-07 09:48:55.685000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.685066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.685315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.685380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.685598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.685663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.685938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.686004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.686282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.686347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 
00:28:06.787 [2024-10-07 09:48:55.686543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.686607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.686850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.686917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.687163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.687227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.687527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.687592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.687880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.687948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 
00:28:06.787 [2024-10-07 09:48:55.688167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.688234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.688459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.688524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.688783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.688851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.689100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.689167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.689417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.689481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 
00:28:06.787 [2024-10-07 09:48:55.689732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.689799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.690048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.690115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.690356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.690420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.690684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.690751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.690959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.691026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 
00:28:06.787 [2024-10-07 09:48:55.691221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.691285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.691485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.691550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.691745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.691813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.692060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.692124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.692405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.692481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 
00:28:06.787 [2024-10-07 09:48:55.692778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.692845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.693064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.693129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.787 [2024-10-07 09:48:55.693412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.787 [2024-10-07 09:48:55.693477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.787 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.693771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.693837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.694077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.694143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 
00:28:06.788 [2024-10-07 09:48:55.694388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.694453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.694709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.694775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.695027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.695093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.695374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.695440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.695649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.695747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 
00:28:06.788 [2024-10-07 09:48:55.695960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.696027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.696276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.696341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.696577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.696643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.696933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.697000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.697208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.697272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 
00:28:06.788 [2024-10-07 09:48:55.697501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.697567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.697789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.697857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.698068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.698133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.698370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.698437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.698697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.698764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 
00:28:06.788 [2024-10-07 09:48:55.699010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.699076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.699305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.699370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.699697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.699764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.700045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.700111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.700371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.700435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 
00:28:06.788 [2024-10-07 09:48:55.700718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.700785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.701094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.701169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.701430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.701495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.701802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.701869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.702124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.702190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 
00:28:06.788 [2024-10-07 09:48:55.702475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.702540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.702797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.702864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.703157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.703222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.703465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.703529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.703813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.703880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 
00:28:06.788 [2024-10-07 09:48:55.704173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.704240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.788 [2024-10-07 09:48:55.704479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.788 [2024-10-07 09:48:55.704545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.788 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.704935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.705003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.705225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.705291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.705505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.705570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 
00:28:06.789 [2024-10-07 09:48:55.705877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.705944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.706246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.706312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.706610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.706688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.706892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.706959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.707206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.707272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 
00:28:06.789 [2024-10-07 09:48:55.707522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.707588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.707867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.707934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.708146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.708213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.708449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.708515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.708779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.708846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 
00:28:06.789 [2024-10-07 09:48:55.709088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.709154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.709450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.709516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.709721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.709790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.709988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.710053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.710303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.710368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 
00:28:06.789 [2024-10-07 09:48:55.710557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.710623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.710891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.710956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.711252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.711317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.711599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.711682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.711936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.712001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 
00:28:06.789 [2024-10-07 09:48:55.712294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.712359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.712619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.712697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.712962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.713028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.713297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.713361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.713646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.713736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 
00:28:06.789 [2024-10-07 09:48:55.713957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.714022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.714252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.714319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.714589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.714655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.714865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.714942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.715150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.715218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 
00:28:06.789 [2024-10-07 09:48:55.715482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.715547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.715756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.715823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.716063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.716127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.716334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.716401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.789 [2024-10-07 09:48:55.716651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.716750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 
00:28:06.789 [2024-10-07 09:48:55.716994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.789 [2024-10-07 09:48:55.717060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.789 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.717298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.717364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.717643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.717737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.717948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.718014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.718224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.718291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 
00:28:06.790 [2024-10-07 09:48:55.718526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.718591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.718879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.718953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.719190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.719255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.719507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.719571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.719851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.719918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 
00:28:06.790 [2024-10-07 09:48:55.720211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.720277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.720495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.720565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.720807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.720874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.721062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.721128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 00:28:06.790 [2024-10-07 09:48:55.721364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.790 [2024-10-07 09:48:55.721428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:06.790 qpair failed and we were unable to recover it. 
00:28:06.790 [2024-10-07 09:48:55.721692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.790 [2024-10-07 09:48:55.721762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:06.790 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" messages for tqpair=0x1fab230, addr=10.0.0.2, port=4420 repeated through 09:48:55.756390 omitted ...]
00:28:07.071 [2024-10-07 09:48:55.756691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-07 09:48:55.756742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.071 qpair failed and we were unable to recover it. 00:28:07.071 [2024-10-07 09:48:55.756914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-07 09:48:55.756949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.071 qpair failed and we were unable to recover it. 00:28:07.071 [2024-10-07 09:48:55.757153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-07 09:48:55.757218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.071 qpair failed and we were unable to recover it. 00:28:07.071 [2024-10-07 09:48:55.757443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-07 09:48:55.757516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.071 qpair failed and we were unable to recover it. 00:28:07.071 [2024-10-07 09:48:55.757784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-07 09:48:55.757819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.071 qpair failed and we were unable to recover it. 
00:28:07.071 [2024-10-07 09:48:55.757922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-07 09:48:55.757966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.071 qpair failed and we were unable to recover it. 00:28:07.071 [2024-10-07 09:48:55.758110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.071 [2024-10-07 09:48:55.758145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.758278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.758311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.758475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.758540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.758785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.758821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 
00:28:07.072 [2024-10-07 09:48:55.759052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.759115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.759414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.759490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.759747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.759782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.759896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.759930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.760069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.760131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 
00:28:07.072 [2024-10-07 09:48:55.760334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.760399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.760602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.760660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.760843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.760877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.761131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.761197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.761480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.761544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 
00:28:07.072 [2024-10-07 09:48:55.761775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.761811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.761946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.761980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.762247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.762312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.762496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.762553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.762759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.762794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 
00:28:07.072 [2024-10-07 09:48:55.762905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.762969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.763207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.763272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.763581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.763647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.763839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.763874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.764066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.764131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 
00:28:07.072 [2024-10-07 09:48:55.764380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.764445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.764702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.764757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.764881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.764916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.765085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.765150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.765451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.765516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 
00:28:07.072 [2024-10-07 09:48:55.765771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.765807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.765959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.766025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.766271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.766338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.766625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.766734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.766880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.766915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 
00:28:07.072 [2024-10-07 09:48:55.767177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.767242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.767495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.767561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.767781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.767816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.767924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.072 [2024-10-07 09:48:55.767958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.072 qpair failed and we were unable to recover it. 00:28:07.072 [2024-10-07 09:48:55.768076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.768110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 
00:28:07.073 [2024-10-07 09:48:55.768306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.768371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.768619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.768725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.768876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.768910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.769090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.769156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.769401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.769472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 
00:28:07.073 [2024-10-07 09:48:55.769722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.769763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.769874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.769908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.770018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.770052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.770193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.770259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.770545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.770610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 
00:28:07.073 [2024-10-07 09:48:55.770864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.770899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.771036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.771072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.771258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.771325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.771565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.771630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.771805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.771840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 
00:28:07.073 [2024-10-07 09:48:55.771946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.771981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.772112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.772187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.772488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.772554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.772819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.772855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.772993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.773032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 
00:28:07.073 [2024-10-07 09:48:55.773233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.773299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.773494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.773560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.773795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.773830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.773972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.774006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.774139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.774210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 
00:28:07.073 [2024-10-07 09:48:55.774420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.774487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.774711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.774779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.775047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.775111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.775393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.775468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.775761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.775829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 
00:28:07.073 [2024-10-07 09:48:55.776078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.776143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.776431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.776495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.776796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.776862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.777107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.777172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.777416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.777483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 
00:28:07.073 [2024-10-07 09:48:55.777718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.777802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.778068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.778133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.073 qpair failed and we were unable to recover it. 00:28:07.073 [2024-10-07 09:48:55.778375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.073 [2024-10-07 09:48:55.778440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.778738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.778804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.779092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.779156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-10-07 09:48:55.779360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.779425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.779657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.779738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.779959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.780023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.780209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.780275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.780521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.780588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-10-07 09:48:55.780859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.780925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.781179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.781247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.781538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.781604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.781929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.781995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.782194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.782258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-10-07 09:48:55.782547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.782612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.782874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.782943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.783229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.783294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.783575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.783642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.783971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.784036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-10-07 09:48:55.784283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.784348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.784623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.784706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.784973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.785037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.785323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.785388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.785710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.785777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-10-07 09:48:55.786061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.786127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.786416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.786490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.786788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.786855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.787112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.787177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.787465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.787529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-10-07 09:48:55.787783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.787850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.788150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.788215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.788462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.788527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.788815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.788881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.789122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.789189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-10-07 09:48:55.789436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.789501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.789765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.789832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.790082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.790148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.790431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.790495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-10-07 09:48:55.790708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-10-07 09:48:55.790776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-10-07 09:48:55.790979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.791044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.791278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.791344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.791565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.791631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.791912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.791983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.792276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.792341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-10-07 09:48:55.792548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.792613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.792927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.792993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.793285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.793350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.793642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.793734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.793981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.794049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-10-07 09:48:55.794257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.794323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.794531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.794598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.794832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.794899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.795081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.795156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.795408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.795482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-10-07 09:48:55.795740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.795808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.796096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.796161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.796414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.796479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.796722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.796790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.797030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.797096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-10-07 09:48:55.797402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.797466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.797752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.797820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.798070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.798135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.798376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.798442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.798688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.798756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-10-07 09:48:55.799061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.799125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.799413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.799478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.799764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.799831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.800116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.800180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.800424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.800489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-10-07 09:48:55.800786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.800853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.801101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.801166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-10-07 09:48:55.801429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-10-07 09:48:55.801495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.801751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.801818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.802065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.802131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-10-07 09:48:55.802389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.802454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.802736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.802804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.803111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.803176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.803415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.803479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.803767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.803834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-10-07 09:48:55.804071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.804137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.804405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.804470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.804751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.804818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.805025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.805090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.805315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.805380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-10-07 09:48:55.805630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.805712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.805967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.806035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.806316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.806388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.806645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.806728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.807019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.807085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-10-07 09:48:55.807344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.807408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.807627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.807710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.807929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.807995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.808277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.808343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-10-07 09:48:55.808641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-10-07 09:48:55.808722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.079 [2024-10-07 09:48:55.844019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.844083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.844326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.844391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.844687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.844753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.845057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.845122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.845374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.845440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 
00:28:07.079 [2024-10-07 09:48:55.845714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.845782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.846032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.846098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.846310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.846375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.846614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.846692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.846985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.847049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 
00:28:07.079 [2024-10-07 09:48:55.847263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.847328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.847575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.847641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.847919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.847984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.848181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.848246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.848474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.848540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 
00:28:07.079 [2024-10-07 09:48:55.848775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.848841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.849111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.849176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.849397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.849471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.849728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.849794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.850041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.850109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 
00:28:07.079 [2024-10-07 09:48:55.850320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.850387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.850580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.850645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.850938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.851003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.851226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.851292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.079 [2024-10-07 09:48:55.851549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.851614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 
00:28:07.079 [2024-10-07 09:48:55.851884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.079 [2024-10-07 09:48:55.851951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.079 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.852197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.852262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.852475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.852542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.852770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.852837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.853128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.853192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-10-07 09:48:55.853413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.853478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.853781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.853848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.854118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.854185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.854398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.854463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.854699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.854766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-10-07 09:48:55.854977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.855043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.855232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.855297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.855568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.855633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.855882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.855948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.856195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.856256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-10-07 09:48:55.856460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.856525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.856746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.856812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.857028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.857093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.857357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.857433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.857634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.857715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-10-07 09:48:55.857966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.858033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.858283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.858348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.858633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.858712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.858967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.859033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.859305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.859371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-10-07 09:48:55.859683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.859760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.859946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.860014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.860305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.860372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.860589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.860654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.860933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.860999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-10-07 09:48:55.861247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.861313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.861594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.861658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.861955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.862021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.862259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-10-07 09:48:55.862321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-10-07 09:48:55.862565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.862629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-10-07 09:48:55.862827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.862889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.863148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.863215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.863437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.863503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.863754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.863828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.864030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.864094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-10-07 09:48:55.864331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.864396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.864700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.864768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.865025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.865093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.865331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.865396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.865615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.865700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-10-07 09:48:55.865912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.865994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.866228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.866292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.866555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.866620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.866842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.866908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.867141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.867205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-10-07 09:48:55.867490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.867555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.867834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.867900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.868144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.868210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.868463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.868528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-10-07 09:48:55.868736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.868802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-10-07 09:48:55.869097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-10-07 09:48:55.869163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.084 [... the same two-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 2024-10-07 09:48:55.904892 ...]
00:28:07.084 [2024-10-07 09:48:55.905141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.905206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.905428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.905493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.905770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.905838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.906028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.906094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.906350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.906415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 
00:28:07.084 [2024-10-07 09:48:55.906708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.906775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.907026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.907091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.907314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.907380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.907608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.907687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.907890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.907957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 
00:28:07.084 [2024-10-07 09:48:55.908170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.908238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.908547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.908614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-10-07 09:48:55.908828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-10-07 09:48:55.908894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.909140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.909205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.909483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.909549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-10-07 09:48:55.909814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.909882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.910137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.910202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.910449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.910515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.910797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.910864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.911069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.911134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-10-07 09:48:55.911369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.911435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.911697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.911764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.911991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.912057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.912299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.912365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.912647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.912729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-10-07 09:48:55.913028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.913094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.913352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.913418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.913690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.913756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.913968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.914034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.914280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.914346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-10-07 09:48:55.914623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.914708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.914975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.915041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.915290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.915376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.915638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.915724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.915967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.916032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-10-07 09:48:55.916245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.916310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.916546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.916611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.916860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.916928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.917182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.917247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.917457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.917522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-10-07 09:48:55.917782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.917850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.918152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.918218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.918456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.918522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.918778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.918844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.919082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.919148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-10-07 09:48:55.919340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.919406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.919645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.919731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.920036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.920101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.920319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.920384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.920633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.920724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-10-07 09:48:55.920964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-10-07 09:48:55.921030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-10-07 09:48:55.921256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.921322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.921594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.921659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.921887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.921952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.922196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.922262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 
00:28:07.086 [2024-10-07 09:48:55.922439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.922503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.922758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.922824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.923072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.923139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.923382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.923447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.923704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.923771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 
00:28:07.086 [2024-10-07 09:48:55.924022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.924088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.924340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.924405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.924695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.924762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.925003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.925068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.925321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.925386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 
00:28:07.086 [2024-10-07 09:48:55.925614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.925709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.925952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.926018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.926204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.926269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.926517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.926581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.926847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.926916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 
00:28:07.086 [2024-10-07 09:48:55.927133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.927200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.927398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.927463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.927656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.927742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.927957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.928023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 00:28:07.086 [2024-10-07 09:48:55.928255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.928320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it. 
00:28:07.086 [2024-10-07 09:48:55.928567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.086 [2024-10-07 09:48:55.928632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.089 [... the message pair above repeats continuously from 09:48:55.928 through 09:48:55.964: every connect() attempt fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the same sock connection error for tqpair=0x1fab230 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:07.089 [2024-10-07 09:48:55.964856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.964922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-10-07 09:48:55.965172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.965237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-10-07 09:48:55.965486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.965552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-10-07 09:48:55.965835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.965901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-10-07 09:48:55.966165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.966231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 
00:28:07.089 [2024-10-07 09:48:55.966443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.966508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-10-07 09:48:55.966770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.966836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-10-07 09:48:55.967053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.967120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-10-07 09:48:55.967334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-10-07 09:48:55.967399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.967597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.967664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 
00:28:07.090 [2024-10-07 09:48:55.967906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.967971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.968198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.968263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.968466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.968531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.968787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.968855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.969095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.969161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 
00:28:07.090 [2024-10-07 09:48:55.969406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.969471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.969732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.969799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.970018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.970082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.970372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.970437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.970627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.970728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 
00:28:07.090 [2024-10-07 09:48:55.971028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.971095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.971343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.971410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.971688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.971755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.972016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.972082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.972301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.972382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 
00:28:07.090 [2024-10-07 09:48:55.972681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.972749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.972957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.973024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.973268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.973334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.973627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.973708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.973980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.974047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 
00:28:07.090 [2024-10-07 09:48:55.974340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.974404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.974696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.974768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.975068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.975134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.975415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.975481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.975735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.975803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 
00:28:07.090 [2024-10-07 09:48:55.976070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.976135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.976431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.976497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.976758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.976826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.977130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.977195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.977477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.977543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 
00:28:07.090 [2024-10-07 09:48:55.977826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.977893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.978133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.978199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.978448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.978515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.978771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.978839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.979040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.979108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 
00:28:07.090 [2024-10-07 09:48:55.979332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.090 [2024-10-07 09:48:55.979399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.090 qpair failed and we were unable to recover it. 00:28:07.090 [2024-10-07 09:48:55.979693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.979761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.979971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.980038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.980322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.980387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.980584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.980651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 
00:28:07.091 [2024-10-07 09:48:55.980862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.980928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.981158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.981225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.981521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.981586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.981859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.981925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.982217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.982282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 
00:28:07.091 [2024-10-07 09:48:55.982565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.982631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.982876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.982943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.983199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.983264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.983527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.983593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.983893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.983960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 
00:28:07.091 [2024-10-07 09:48:55.984176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.984242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.984528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.984593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.984814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.984881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.985101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.985166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.985388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.985454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 
00:28:07.091 [2024-10-07 09:48:55.985711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.985796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.986025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.986090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.986347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.986412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.986693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.986762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.987062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.987128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 
00:28:07.091 [2024-10-07 09:48:55.987370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.987435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.987717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.987785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.988088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.988152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.988451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.988517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.988764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.988832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 
00:28:07.091 [2024-10-07 09:48:55.989123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.989188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.989434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.989500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.989747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.989816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.990070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.990136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.990381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.990446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 
00:28:07.091 [2024-10-07 09:48:55.990740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.990808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.991094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.991159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.991446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.991511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.991805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.991872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 00:28:07.091 [2024-10-07 09:48:55.992120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.091 [2024-10-07 09:48:55.992185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.091 qpair failed and we were unable to recover it. 
00:28:07.092 [2024-10-07 09:48:55.992426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.992490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.992741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.992809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.993064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.993129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.993382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.993448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.993702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.993769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 
00:28:07.092 [2024-10-07 09:48:55.994074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.994140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.994399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.994464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.994717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.994794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.995073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.995139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.995392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.995456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 
00:28:07.092 [2024-10-07 09:48:55.995709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.995775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.996031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.996097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.996350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.996414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.996694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.996762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.997022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.997087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 
00:28:07.092 [2024-10-07 09:48:55.997370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.997435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.997717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.997785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.997987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.998052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.998342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.998407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.998703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.998770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 
00:28:07.092 [2024-10-07 09:48:55.999013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.999078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.999382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.999447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:55.999693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:55.999761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.000019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.000085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.000324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.000390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 
00:28:07.092 [2024-10-07 09:48:56.000682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.000748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.000995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.001061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.001253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.001321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.001617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.001700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.001950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.002017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 
00:28:07.092 [2024-10-07 09:48:56.002269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.002336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.002562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.002627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.002938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.003005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.003252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.003318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.003567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.003642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 
00:28:07.092 [2024-10-07 09:48:56.003932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.003999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.004238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.004304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.092 [2024-10-07 09:48:56.004600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.092 [2024-10-07 09:48:56.004683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.092 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.004978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.005044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.005346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.005411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 
00:28:07.093 [2024-10-07 09:48:56.005722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.005789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.006064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.006132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.006418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.006484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.006773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.006840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.007049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.007115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 
00:28:07.093 [2024-10-07 09:48:56.007394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.007459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.007705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.007772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.008051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.008117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.008344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.008406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.008643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.008726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 
00:28:07.093 [2024-10-07 09:48:56.008984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.009049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.009333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.009398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.009642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.009733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.009952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.010018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.010216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.010282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 
00:28:07.093 [2024-10-07 09:48:56.010487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.010553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.010808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.010874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.011167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.011231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.011484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.011549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.011850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.011916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 
00:28:07.093 [2024-10-07 09:48:56.012196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.012261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.012506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.012570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.012884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.012951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.013160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.013227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.013510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.013575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 
00:28:07.093 [2024-10-07 09:48:56.013881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.013947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.014205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.014272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.014558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.014622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.014948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.015013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.015301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.015367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 
00:28:07.093 [2024-10-07 09:48:56.015656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.093 [2024-10-07 09:48:56.015742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.093 qpair failed and we were unable to recover it. 00:28:07.093 [2024-10-07 09:48:56.016027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.016092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.016346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.016413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.016593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.016658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.016944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.017010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 
00:28:07.094 [2024-10-07 09:48:56.017271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.017337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.017547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.017612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.017870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.017937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.018207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.018271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.018566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.018631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 
00:28:07.094 [2024-10-07 09:48:56.018917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.018983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.019232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.019296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.019595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.019659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.019991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.020057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.020251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.020317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 
00:28:07.094 [2024-10-07 09:48:56.020532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.020600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.020874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.020943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.021184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.021248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.021544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.021609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.021898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.021964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 
00:28:07.094 [2024-10-07 09:48:56.022265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.022329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.022579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.022645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.022955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.023021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.023268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.023332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 00:28:07.094 [2024-10-07 09:48:56.023562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.094 [2024-10-07 09:48:56.023626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.094 qpair failed and we were unable to recover it. 
00:28:07.094 [2024-10-07 09:48:56.023968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.024034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.024285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.024349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.024552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.024616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.024931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.024996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.025251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.025317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.025612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.025697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.025961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.026026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.026270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.026347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.026630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.026715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.026965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.027032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.027261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.027326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.027582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.027647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.027958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.028024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.094 [2024-10-07 09:48:56.028313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.094 [2024-10-07 09:48:56.028378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.094 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.028685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.028752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.028962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.029027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.029272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.029339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.029628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.029714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.029980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.030045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.030295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.030359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.030614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.030698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.030974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.031040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.031334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.031399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.031724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.031791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.032054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.032118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.032408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.032473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.032725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.032792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.033045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.033110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.033395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.033460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.033849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.033918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.034209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.034275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.034495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.034560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.034830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.034898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.035136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.035202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.035441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.035516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.035823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.035890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.036186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.036251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.036543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.036607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.036907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.036973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.037257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.037322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.037590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.037655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.037892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.037957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.038242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.038307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.038540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.038607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.038910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.038977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.039224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.039289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.039533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.039598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.039916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.039982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.040293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.040358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.040595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.040661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.040935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.041000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.041252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.095 [2024-10-07 09:48:56.041316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.095 qpair failed and we were unable to recover it.
00:28:07.095 [2024-10-07 09:48:56.041565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.041630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.041901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.041966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.042253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.042318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.042577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.042642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.042960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.043025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.043309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.043374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.043664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.043747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.043971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.044037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.044287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.044352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.044650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.044742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.045007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.045073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.045309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.045375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.045683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.045750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.045973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.046038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.046263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.046328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.046569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.046633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.046949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.047014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.047300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.096 [2024-10-07 09:48:56.047365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.096 qpair failed and we were unable to recover it.
00:28:07.096 [2024-10-07 09:48:56.047611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.047709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.047999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.048064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.048363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.048428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.048723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.048791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.048995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.049061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.049354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.049418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.049663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.049743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.049996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.050061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.050313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.050378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.050662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.050743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.050988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.051055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.051297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.051362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.051606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.051690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.051932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.052008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.052261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.052327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.052610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.052691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.052980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.053047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.053337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.053402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.053644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.053736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.054001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.054067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-10-07 09:48:56.054355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-10-07 09:48:56.054420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.395 [2024-10-07 09:48:56.054659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-10-07 09:48:56.054747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-10-07 09:48:56.055049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-10-07 09:48:56.055115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-10-07 09:48:56.055411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-10-07 09:48:56.055476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-10-07 09:48:56.055772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-10-07 09:48:56.055838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-10-07 09:48:56.056032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.056098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.056338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.056404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.056700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.056775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.057086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.057150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.057448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.057514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 
00:28:07.395 [2024-10-07 09:48:56.057801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.057868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.058157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.058223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.058527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.058595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.058901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.058974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.059260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.059326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 
00:28:07.395 [2024-10-07 09:48:56.059576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.059642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.059903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.059981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.060231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.060297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.060542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.060609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.060884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.060959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 
00:28:07.395 [2024-10-07 09:48:56.061258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.061324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.061615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.061695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.061958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.062024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.062307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.062374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.062630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.062711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 
00:28:07.395 [2024-10-07 09:48:56.062973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.063039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.063350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.063417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.063703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.063770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.063961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.064026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.064286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.064351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 
00:28:07.395 [2024-10-07 09:48:56.064600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.064683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.064976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.065042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.065294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.065360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.065647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.065740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.065983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.066049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 
00:28:07.395 [2024-10-07 09:48:56.066284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.066350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.066636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.066732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-10-07 09:48:56.067024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-10-07 09:48:56.067089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.067299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.067361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.067641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.067742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-10-07 09:48:56.067996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.068063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.068363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.068429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.068727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.068793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.069048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.069114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.069359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.069425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-10-07 09:48:56.069694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.069772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.070061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.070125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.070403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.070470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.070708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.070774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.071000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.071066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-10-07 09:48:56.071300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.071367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.071630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.071734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.072040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.072107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.072367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.072433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.072623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.072707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-10-07 09:48:56.072994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.073061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.073349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.073416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.073713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.073780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.074084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.074150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.074442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.074508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-10-07 09:48:56.074767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.074833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.075082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.075148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.075434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.075501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.075699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.075768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.076034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.076100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-10-07 09:48:56.076345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.076410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.076719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.076798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.077048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.077113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.077402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.077467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.077727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.077794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-10-07 09:48:56.078096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.078162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.078416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.078481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.078764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.078831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.079073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.079140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.079393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.079458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-10-07 09:48:56.079762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-10-07 09:48:56.079837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-10-07 09:48:56.080093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.080159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.080412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.080486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.080779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.080845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.081129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.081193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-10-07 09:48:56.081465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.081529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.081766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.081835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.082121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.082187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.082480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.082546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.082840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.082907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-10-07 09:48:56.083190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.083256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.083502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.083567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.083824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.083892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.084140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.084206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-10-07 09:48:56.084442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-10-07 09:48:56.084507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-10-07 09:48:56.084806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.084874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.085080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.085145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.085436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.085502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.085787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.085855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.086074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.086138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.086373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.086439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.086699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.086767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.087008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.087075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.087368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.087434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.087716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.087784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.088022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.088091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.088377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.088443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.088737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.088804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.089012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.089077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.089358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.089423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.089722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.089809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 329527 Killed "${NVMF_APP[@]}" "$@"
00:28:07.397 [2024-10-07 09:48:56.090097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.090162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.090466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.090532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.090847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:07.397 [2024-10-07 09:48:56.090914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.091175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:07.397 [2024-10-07 09:48:56.091241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:07.397 [2024-10-07 09:48:56.091534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 [2024-10-07 09:48:56.091604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 [2024-10-07 09:48:56.091837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.397 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:07.397 [2024-10-07 09:48:56.091903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.397 qpair failed and we were unable to recover it.
00:28:07.397 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:07.397 [2024-10-07 09:48:56.092187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.092252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.092541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.092607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.092824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.092892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.093154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.093220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.093470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.093536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.093788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.093825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.093942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.093978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.094132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.094169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.094317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.094354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.094500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.094536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.094650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.094695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.094809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.094845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.094994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.095031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.095287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.095353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.095641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.095741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.096001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.096066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.096353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.096412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.096599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.096635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.096780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.096815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.096999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.097035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.097292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.097357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.097616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.097721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.097839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.097872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.098079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.098140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.098379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.098440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.098685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=330048
00:28:07.398 [2024-10-07 09:48:56.098744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.098854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:07.398 [2024-10-07 09:48:56.098889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 330048
00:28:07.398 [2024-10-07 09:48:56.099026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.099060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 330048 ']'
00:28:07.398 [2024-10-07 09:48:56.099227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.099263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:07.398 [2024-10-07 09:48:56.099369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.099405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:07.398 [2024-10-07 09:48:56.099565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 [2024-10-07 09:48:56.099641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.099800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:07.398 [2024-10-07 09:48:56.099835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:07.398 qpair failed and we were unable to recover it.
00:28:07.398 [2024-10-07 09:48:56.099961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.398 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:07.398 [2024-10-07 09:48:56.099995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.100102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:07.399 [2024-10-07 09:48:56.100135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.100280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.100313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.100424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.100456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.100564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.100596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.100728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.100762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.100874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.100908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.101035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.101070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.101208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.101245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.101384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.101427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.101557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.101592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.101720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.101756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.101861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.101896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.102003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.102037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.102139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.102173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.102331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.102396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.102583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.102647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.102818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.102852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.102988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.103023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.103139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.103213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.103456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.103521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.103759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.103795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.103915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.103950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.104060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.104103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.104249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.104284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.104523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.104589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.104769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.104804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.104946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.104998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.105265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.105299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.105554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.105619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.105796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.105831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.105942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.105992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.106133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.106197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.106476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.399 [2024-10-07 09:48:56.106542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.399 qpair failed and we were unable to recover it.
00:28:07.399 [2024-10-07 09:48:56.106783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.106815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.106936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.106967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.107096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.107126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.107266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.107298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.107397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.107428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.107554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.107584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.107691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.107723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.107826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.107857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.107960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.107990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.108090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.108121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.108248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.108281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.108382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.108410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.108506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.108537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.108683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.108725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.108828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.108859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.108960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.108991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.109142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.109206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.109415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.109480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.109685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.109735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.109836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.109865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.109998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.400 [2024-10-07 09:48:56.110027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.400 qpair failed and we were unable to recover it.
00:28:07.400 [2024-10-07 09:48:56.110132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.110164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.110341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.110400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.110628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.110710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.110836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.110870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.110984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.111019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 
00:28:07.400 [2024-10-07 09:48:56.111241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.111301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.111494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.111555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.111764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.111795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.111901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.111932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.112099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.112158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 
00:28:07.400 [2024-10-07 09:48:56.112392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.112423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.112517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.112547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.112654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.112693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.112785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.112816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.112918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.112948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 
00:28:07.400 [2024-10-07 09:48:56.113137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.113209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.113460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.113521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.400 qpair failed and we were unable to recover it. 00:28:07.400 [2024-10-07 09:48:56.113726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.400 [2024-10-07 09:48:56.113756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.113878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.113907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.114003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.114032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 
00:28:07.401 [2024-10-07 09:48:56.114122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.114152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.114246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.114276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.114375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.114405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.114581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.114615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.114789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.114819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 
00:28:07.401 [2024-10-07 09:48:56.114918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.114948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.115049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.115079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.115180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.115210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.115334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.115364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.115458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.115489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 
00:28:07.401 [2024-10-07 09:48:56.115595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.115625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.115736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.115767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.115864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.115894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.115986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.116015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.116156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.116190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 
00:28:07.401 [2024-10-07 09:48:56.116333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.116365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.116516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.116551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.116643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.116689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.116788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.116818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.116919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.116949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 
00:28:07.401 [2024-10-07 09:48:56.117080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.117126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.117227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.117261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.117422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.117451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.117623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.117659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.117782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.117812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 
00:28:07.401 [2024-10-07 09:48:56.117902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.117932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.118024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.118054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.118147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.118176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.118268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.118298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.118389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.118418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 
00:28:07.401 [2024-10-07 09:48:56.118516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.118546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.118700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.118731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.118855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.118885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.119009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.119039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.119135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.119164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 
00:28:07.401 [2024-10-07 09:48:56.119385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.401 [2024-10-07 09:48:56.119420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.401 qpair failed and we were unable to recover it. 00:28:07.401 [2024-10-07 09:48:56.119532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.119568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.119760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.119790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.119888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.119918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.120009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.120038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 
00:28:07.402 [2024-10-07 09:48:56.120129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.120159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.120293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.120323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.120420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.120451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.120579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.120614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.120728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.120759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 
00:28:07.402 [2024-10-07 09:48:56.120862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.120892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.121016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.121045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.121225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.121282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.121487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.121550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.121712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.121746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 
00:28:07.402 [2024-10-07 09:48:56.121885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.121920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.122024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.122058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.122198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.122250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.122444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.122479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.122588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.122622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 
00:28:07.402 [2024-10-07 09:48:56.122786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.122822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.122937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.122993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.123109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.123142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.123279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.123340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-10-07 09:48:56.123632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-10-07 09:48:56.123685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 
00:28:07.405 [2024-10-07 09:48:56.147106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.147154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.147321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.147383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.147532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.147580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.147742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.147779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.147889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.147924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 
00:28:07.405 [2024-10-07 09:48:56.148042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.148077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.148233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.148266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.148438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.148472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.148630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.148673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.148838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.148873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 
00:28:07.405 [2024-10-07 09:48:56.148994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.149028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.149189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.149237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.149380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.149429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.149595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.149630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 00:28:07.405 [2024-10-07 09:48:56.149760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-10-07 09:48:56.149795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 
00:28:07.406 [2024-10-07 09:48:56.149942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.149978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.150138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.150193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.150383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.150430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.150621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.150690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.150820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.150882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 
00:28:07.406 [2024-10-07 09:48:56.151028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.151076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.151227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.151275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.151422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.151470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.151720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.151754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.151867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.151901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 
00:28:07.406 [2024-10-07 09:48:56.152061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.152094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.152237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.152271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.152476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.152524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.152720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.152695] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:28:07.406 [2024-10-07 09:48:56.152757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.152789] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:07.406 [2024-10-07 09:48:56.152877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.152912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.153019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.153078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.153247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.153282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.153447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.153493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.153677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.406 [2024-10-07 09:48:56.153738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.406 qpair failed and we were unable to recover it.
00:28:07.406 [2024-10-07 09:48:56.153869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.153932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.154100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.154148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.154315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.154348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.154530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.154579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.154791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.154828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 
00:28:07.406 [2024-10-07 09:48:56.154940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.155001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.155188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.155254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.155456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.155502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.155693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.155749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.155868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.155903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 
00:28:07.406 [2024-10-07 09:48:56.156062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.156109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.156259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.156305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.156492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.156538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.156699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.156735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.156869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.156910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 
00:28:07.406 [2024-10-07 09:48:56.157116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.157166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.157369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.157415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.157591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.406 [2024-10-07 09:48:56.157626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.406 qpair failed and we were unable to recover it. 00:28:07.406 [2024-10-07 09:48:56.157740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.157777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.157909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.157959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 
00:28:07.407 [2024-10-07 09:48:56.158155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.158190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.158328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.158362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.158508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.158553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.158739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.158776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.158897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.158944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 
00:28:07.407 [2024-10-07 09:48:56.159129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.159175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.159359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.159418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.159610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.159656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.159811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.159847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.160004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.160060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 
00:28:07.407 [2024-10-07 09:48:56.160264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.160310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.160473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.160526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.160661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.160702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.160856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.160891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.161158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.161190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 
00:28:07.407 [2024-10-07 09:48:56.161327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.161361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.161527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.161570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.161735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.161770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.161911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.161975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.162148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.162190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 
00:28:07.407 [2024-10-07 09:48:56.162358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.162413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.162548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.162581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.162743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.162779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.162947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.162983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.163128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.163170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 
00:28:07.407 [2024-10-07 09:48:56.163354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.163396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.163569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.163614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.163797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.163856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.163991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.164039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.164180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.164233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 
00:28:07.407 [2024-10-07 09:48:56.164426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.164460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.164600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.164633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.164794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.164837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.164989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.165043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.165192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.165247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 
00:28:07.407 [2024-10-07 09:48:56.165444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.165479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.407 [2024-10-07 09:48:56.165616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.407 [2024-10-07 09:48:56.165650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.407 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.165820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.165863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.166017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.166061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.166292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.166335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 
00:28:07.408 [2024-10-07 09:48:56.166473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.166516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.166638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.166683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.166846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.166880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.167004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.167038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.167155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.167208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 
00:28:07.408 [2024-10-07 09:48:56.167351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.167395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.167566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.167601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.167758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.167794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.167906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.167961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.168135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.168178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 
00:28:07.408 [2024-10-07 09:48:56.168313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.168356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.168521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.168556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.168662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.168708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.168870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.168904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.169046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.169079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 
00:28:07.408 [2024-10-07 09:48:56.169285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.169328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.169483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.169517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.169620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.169654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.169793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.169828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.170019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.170053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 
00:28:07.408 [2024-10-07 09:48:56.170222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.170280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.170454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.170512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.170731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.170773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.170890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.170936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.171111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.171154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 
00:28:07.408 [2024-10-07 09:48:56.171296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.171340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.171541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.171584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.171769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.171806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.171944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.171983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 00:28:07.408 [2024-10-07 09:48:56.172148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.172190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.408 qpair failed and we were unable to recover it. 
00:28:07.408 [2024-10-07 09:48:56.172377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.408 [2024-10-07 09:48:56.172419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.172594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.172638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.172835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.172869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.173007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.173040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.173221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.173263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-10-07 09:48:56.173400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.173443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.173653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.173695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.173815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.173849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.174042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.174085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.174299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.174334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-10-07 09:48:56.174555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.174597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.174759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.174794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.174898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.174933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.175094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.175137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.175304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.175359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-10-07 09:48:56.175548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.175591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.175746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.175798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.175973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.176007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.176184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.176217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.176373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.176413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-10-07 09:48:56.176600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.176635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.176788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.176824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.176973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.177015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.177214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.177256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.177471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.177522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-10-07 09:48:56.177739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.177775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.177903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.177938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.178123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.178174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.178320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.178356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.178593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.178627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-10-07 09:48:56.178748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.178782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.178964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.179007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.179204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.179247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.179454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.179489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.179653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.179714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-10-07 09:48:56.179830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.179866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.179978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.180014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.180180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.180223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.180393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-10-07 09:48:56.180437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-10-07 09:48:56.180586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.180628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-10-07 09:48:56.180780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.180816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.180964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.181024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.181214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.181249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.181446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.181495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.181645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.181718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-10-07 09:48:56.181839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.181874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.182017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.182058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.182305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.182348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.182513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.182555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.182745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.182782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-10-07 09:48:56.182891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.182926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.183096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.183138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.183312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.183354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.183524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.183572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.183751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.183787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-10-07 09:48:56.183897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.183947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.184133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.184185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.184389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.184432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.184596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.184652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.184816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.184851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-10-07 09:48:56.185024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.185091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.185286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.185333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.185519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.185566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.185767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.185804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.185941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.186011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-10-07 09:48:56.186226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.186261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.186368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.186405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.186585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.186629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.186806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.186842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.187012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.187074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-10-07 09:48:56.187253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.187302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.187494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.187558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.187791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.187827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.187995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.188036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.188176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.188209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-10-07 09:48:56.188403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.188438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.188655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.188723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-10-07 09:48:56.188882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-10-07 09:48:56.188926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.189158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.189193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.189313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.189348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-10-07 09:48:56.189487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.189521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.189683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.189719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.189866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.189909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.190081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.190123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.190328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.190370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-10-07 09:48:56.190541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.190576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.190747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.190783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.190945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.190990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.191199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.191233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.191374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.191409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-10-07 09:48:56.191597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.191639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.191791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.191833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.192010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.192044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.192215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.192250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.192366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.192400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-10-07 09:48:56.192574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.192600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.192702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.192728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.192846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.192872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.192951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.192977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.193088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.193114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-10-07 09:48:56.193208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.193237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.193347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.193374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.193455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.193481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.193558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.193583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.193720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.193747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-10-07 09:48:56.193836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.193862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.194002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.194027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.194107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.194133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.194206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.194232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.194349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.194375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-10-07 09:48:56.194454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.194480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.194594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.194620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.194745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.194771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.194888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.194914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-10-07 09:48:56.195005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.195031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-10-07 09:48:56.195120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-10-07 09:48:56.195145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.195260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.195287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.195398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.195424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.195512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.195538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.195621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.195647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 
00:28:07.412 [2024-10-07 09:48:56.195734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.195760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.195848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.195874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.195959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.195984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.196096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.196122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.196231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.196257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 
00:28:07.412 [2024-10-07 09:48:56.196366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.196392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.196478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.196503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.196595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.196620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.196744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.196772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.196856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.196882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 
00:28:07.412 [2024-10-07 09:48:56.196976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.197074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.197181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.197317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.197422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 
00:28:07.412 [2024-10-07 09:48:56.197526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.197636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.197744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.197884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.197909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.198000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.198026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 
00:28:07.412 [2024-10-07 09:48:56.198135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.198167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.198275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.198301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.198392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.198418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.198493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.198519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.198633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.198659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 
00:28:07.412 [2024-10-07 09:48:56.198754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.198779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.198887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.198913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.199001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.199027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.199151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.199176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 00:28:07.412 [2024-10-07 09:48:56.199266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.412 [2024-10-07 09:48:56.199293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.412 qpair failed and we were unable to recover it. 
00:28:07.412 [2024-10-07 09:48:56.199368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.199394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.199507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.199532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.199644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.199690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.199777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.199802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.199894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.199920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 
00:28:07.413 [2024-10-07 09:48:56.199990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.200015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.200154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.200179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.200302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.200328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.200416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.200443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.200531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.200557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 
00:28:07.413 [2024-10-07 09:48:56.200705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.200731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.200851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.200877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.200949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.200974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.201089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.201115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.201197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.201223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 
00:28:07.413 [2024-10-07 09:48:56.201310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.201336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.201449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.201474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.201585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.201611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.201700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.201726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.201840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.201866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 
00:28:07.413 [2024-10-07 09:48:56.201983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.202087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.202225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.202336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.202502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 
00:28:07.413 [2024-10-07 09:48:56.202632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.202749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.202857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.202963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.202989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.203099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.203124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 
00:28:07.413 [2024-10-07 09:48:56.203205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.203230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.203344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.203369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.203508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.203534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.203625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.203650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.203736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.203762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 
00:28:07.413 [2024-10-07 09:48:56.203877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.203903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.203997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.204023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.204098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.204124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.413 [2024-10-07 09:48:56.204238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.413 [2024-10-07 09:48:56.204264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.413 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.204370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.204396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 
00:28:07.414 [2024-10-07 09:48:56.204476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.204501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.204585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.204611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.204702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.204728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.204870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.204896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.204976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.205002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 
00:28:07.414 [2024-10-07 09:48:56.205119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.205145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.205228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.205254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.205360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.205386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.205503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.205529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.205599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.205625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 
00:28:07.414 [2024-10-07 09:48:56.205713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.205739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.205824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.205850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.205991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.206017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.206094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.206119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.206222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.206248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 
00:28:07.414 [2024-10-07 09:48:56.206389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.206415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.206529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.206555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.206643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.206676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.206755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.206785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.206878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.206903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 
00:28:07.414 [2024-10-07 09:48:56.207015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.207040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.207162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.207188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.207287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.207313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.207408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.207434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.207537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.207562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 
00:28:07.414 [2024-10-07 09:48:56.207638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.207664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.207783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.207809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.207894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.207920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.208030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.208055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.208135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.208161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 
00:28:07.414 [2024-10-07 09:48:56.208274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.208300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.208386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.208412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.208528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.208554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.208641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.208673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.208765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.208791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 
00:28:07.414 [2024-10-07 09:48:56.208904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.208930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.209039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.209065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.414 [2024-10-07 09:48:56.209202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.414 [2024-10-07 09:48:56.209228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.414 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.209304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.209330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.209413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.209439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 
00:28:07.415 [2024-10-07 09:48:56.209553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.209578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.209655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.209688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.209797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.209823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.209931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.209957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.210066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.210092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 
00:28:07.415 [2024-10-07 09:48:56.210178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.210208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.210316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.210342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.210455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.210480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.210557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.210583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.210694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.210720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 
00:28:07.415 [2024-10-07 09:48:56.210826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.210852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.210938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.210963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.211051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.211076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.211191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.211216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 00:28:07.415 [2024-10-07 09:48:56.211298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.415 [2024-10-07 09:48:56.211324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.415 qpair failed and we were unable to recover it. 
00:28:07.415 [2024-10-07 09:48:56.211410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.211435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.211539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.211564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.211636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.211661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.211802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.211828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.211956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.211981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.212087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.212113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.212221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.212248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.212362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.212387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.212505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.212531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.212644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.212675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.212816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.212842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.212956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.212981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.213093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.213119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.213254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.213279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.213363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.213389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.213528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.213554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.213677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.213703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.213841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.213867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.213959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.213994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.214104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.214129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.214248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.214274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.415 qpair failed and we were unable to recover it.
00:28:07.415 [2024-10-07 09:48:56.214384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.415 [2024-10-07 09:48:56.214409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.214544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.214569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.214651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.214690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.214825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.214851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.214937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.214963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.215102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.215128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.215211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.215237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.215349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.215374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.215455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.215481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.215587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.215614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.215767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.215793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.215882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.215909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.216953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.216978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.217092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.217118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.217207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.217233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.217341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.217366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.217504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.217530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.217608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.217633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.217752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.217778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.217870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.217895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.217984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.218011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.218115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.218141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.218248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.218273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.218383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.218409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.218492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.218518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.218633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.218659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.218762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.218787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.218875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.218901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.218988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.219015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.219135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.219165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.219272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.416 [2024-10-07 09:48:56.219298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.416 qpair failed and we were unable to recover it.
00:28:07.416 [2024-10-07 09:48:56.219401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.219427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.219506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.219531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.219632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.219658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.219773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.219799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.219915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.219940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.220018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.220043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.220119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.220145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.220228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.220253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.220344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.220370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.220476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.220502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.220649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.220685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.220792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.220818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.220936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.220961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.221099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.221125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.221236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.221261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.221343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.221369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.221508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.221534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.221645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.221689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.221805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.221831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.222026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.222141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.222274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.222403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.222507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.222605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.222730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.222869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.222977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.223002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.223108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.223133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.223241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.223266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.223348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.223373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.223460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:07.417 [2024-10-07 09:48:56.223508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.223533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.223677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.223704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.417 [2024-10-07 09:48:56.223794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.417 [2024-10-07 09:48:56.223819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.417 qpair failed and we were unable to recover it.
00:28:07.418 [2024-10-07 09:48:56.223933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.418 [2024-10-07 09:48:56.223959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.418 qpair failed and we were unable to recover it.
00:28:07.418 [2024-10-07 09:48:56.224068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.224093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.224208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.224234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.224324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.224350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.224453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.224478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.224591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.224617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 
00:28:07.418 [2024-10-07 09:48:56.224715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.224742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.224822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.224847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.224939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.224965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.225073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.225098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.225188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.225213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 
00:28:07.418 [2024-10-07 09:48:56.225290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.225316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.225391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.225417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.225553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.225578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.225678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.225704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.225819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.225845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 
00:28:07.418 [2024-10-07 09:48:56.225931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.225956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.226039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.226065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.226183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.226209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.226292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.226317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.226396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.226422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 
00:28:07.418 [2024-10-07 09:48:56.226505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.226531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.226638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.226663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.226801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.226827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.226918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.226943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.227050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.227075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 
00:28:07.418 [2024-10-07 09:48:56.227187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.227213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.227321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.227347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.227454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.227480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.227589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.227615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.227741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.227767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 
00:28:07.418 [2024-10-07 09:48:56.227890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.227916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.228006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.228032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.228143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.228169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.228283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.228309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.228420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.228445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 
00:28:07.418 [2024-10-07 09:48:56.228530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.228555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.228679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.418 [2024-10-07 09:48:56.228706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.418 qpair failed and we were unable to recover it. 00:28:07.418 [2024-10-07 09:48:56.228828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.228854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.228971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.228997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.229139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.229164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 
00:28:07.419 [2024-10-07 09:48:56.229244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.229270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.229377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.229403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.229516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.229542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.229623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.229655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.229740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.229770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 
00:28:07.419 [2024-10-07 09:48:56.229883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.229908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.230019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.230043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.230131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.230155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.230228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.230253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.230386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.230410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 
00:28:07.419 [2024-10-07 09:48:56.230546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.230571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.230709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.230734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.230810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.230834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.230915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.230940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.231049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.231073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 
00:28:07.419 [2024-10-07 09:48:56.231180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.231204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.231308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.231333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.231438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.231462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.231575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.231599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.231695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.231721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 
00:28:07.419 [2024-10-07 09:48:56.231807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.231832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.231916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.231941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.232077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.232181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.232284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 
00:28:07.419 [2024-10-07 09:48:56.232396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.232508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.232617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.232739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.232851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 
00:28:07.419 [2024-10-07 09:48:56.232964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.232995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.233134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.233163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.233273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.233298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.233374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.233400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.233479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.233504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 
00:28:07.419 [2024-10-07 09:48:56.233586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.419 [2024-10-07 09:48:56.233612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.419 qpair failed and we were unable to recover it. 00:28:07.419 [2024-10-07 09:48:56.233722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.233749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.233838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.233864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.233971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.233997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.234103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.234130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 
00:28:07.420 [2024-10-07 09:48:56.234239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.234265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.234344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.234370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.234482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.234508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.234592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.234618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.234728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.234755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 
00:28:07.420 [2024-10-07 09:48:56.234844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.234870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.234989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.235016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.235133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.235159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.235250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.235275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 00:28:07.420 [2024-10-07 09:48:56.235353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.420 [2024-10-07 09:48:56.235380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.420 qpair failed and we were unable to recover it. 
00:28:07.423 [2024-10-07 09:48:56.248490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.248516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.248599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.248625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.248715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.248741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.248820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.248845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.248925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.248950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 
00:28:07.423 [2024-10-07 09:48:56.249056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.249083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.249173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.249198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.249315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.249358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.249454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.249483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.249578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.249606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 
00:28:07.423 [2024-10-07 09:48:56.249731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.249760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.249871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.249899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.250045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.250075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.250168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.250196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.250316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.250350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 
00:28:07.423 [2024-10-07 09:48:56.250438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.250465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.250559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.250585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.250664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.250697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.250798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.250824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.250904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.250930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 
00:28:07.423 [2024-10-07 09:48:56.251043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.251072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.251186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.251212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.251326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.251351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.251435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.251461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.251537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.251562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 
00:28:07.423 [2024-10-07 09:48:56.251695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.251722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.423 [2024-10-07 09:48:56.251808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.423 [2024-10-07 09:48:56.251833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.423 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.251922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.251948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.252035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.252149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 
00:28:07.424 [2024-10-07 09:48:56.252261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.252391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.252488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.252591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.252707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 
00:28:07.424 [2024-10-07 09:48:56.252815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.252916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.252941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.253024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.253049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.253146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.253170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.253263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.253294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 
00:28:07.424 [2024-10-07 09:48:56.253388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.253416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.253506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.253532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.253649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.253683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.253772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.253800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.253885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.253913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 
00:28:07.424 [2024-10-07 09:48:56.254003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.254029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.254150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.254177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.254266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.254297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.254413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.254438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.254521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.254547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 
00:28:07.424 [2024-10-07 09:48:56.254663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.254694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.254782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.254809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.254889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.254914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.254990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.255016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.255108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.255133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 
00:28:07.424 [2024-10-07 09:48:56.255247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.255275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.255391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.255418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.255510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.255538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.255659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.255694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.255789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.255816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 
00:28:07.424 [2024-10-07 09:48:56.255901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.255926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.256013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.256039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.256132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.256158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.256239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.256264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 00:28:07.424 [2024-10-07 09:48:56.256343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.256368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.424 qpair failed and we were unable to recover it. 
00:28:07.424 [2024-10-07 09:48:56.256452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.424 [2024-10-07 09:48:56.256477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.256565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.256591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.256702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.256728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.256809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.256834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.256944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.256968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 
00:28:07.425 [2024-10-07 09:48:56.257106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.257133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.257214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.257239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.257327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.257353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.257434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.257460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.257542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.257567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 
00:28:07.425 [2024-10-07 09:48:56.257678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.257704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.257804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.257829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.257944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.257969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.258053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.258078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.258165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.258190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 
00:28:07.425 [2024-10-07 09:48:56.258269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.258294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.258403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.258429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.258515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.258540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.258658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.258695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 00:28:07.425 [2024-10-07 09:48:56.258784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.425 [2024-10-07 09:48:56.258813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.425 qpair failed and we were unable to recover it. 
00:28:07.425 [2024-10-07 09:48:56.258926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.258952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.259045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.259072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.259187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.259214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.259312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.259339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.259444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.259471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.259559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.259586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.259677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.259704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.259786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.259813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.259896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.259923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.260071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.260205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.260320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.260433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.260552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.260656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.260780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.260886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.260998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.261023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.425 qpair failed and we were unable to recover it.
00:28:07.425 [2024-10-07 09:48:56.261107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.425 [2024-10-07 09:48:56.261133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.261247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.261272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.261377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.261403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.261487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.261512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.261593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.261627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.261723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.261748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.261824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.261850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.261942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.261974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.262059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.262085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.262179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.262205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.262290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.262315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.262434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.262460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.262577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.262602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.262725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.262752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.262861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.262886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.262979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.263082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.263187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.263290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.263432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.263538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.263686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.263796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.263903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.263928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.264012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.264038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.264126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.264157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.264243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.264268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.264381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.264407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.264503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.264543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.264644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.264686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.264771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.264799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.264886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.264913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.265035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.265063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.265182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.265209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.426 qpair failed and we were unable to recover it.
00:28:07.426 [2024-10-07 09:48:56.265318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.426 [2024-10-07 09:48:56.265346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.265433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.265459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.265573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.265602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.265691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.265718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.265838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.265864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.265961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.265988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.266101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.266128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.266216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.266244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.266354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.266382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.266471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.266499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.266579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.266606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.266704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.266733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.266820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.266848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.266969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.266996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.267089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.267117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.267232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.267260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.267378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.267404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.267496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.267523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.267662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.267700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.267814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.267840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.267925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.267951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.268050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.268077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.268198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.268225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.268317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.268345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.268429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.268456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.268556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.268584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.268727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.268755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.268849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.268875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.268984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.269012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.269154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.269181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.269290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.269317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.269427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.269454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.269577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.269605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.269713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.269741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.269851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.269878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.269964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.269992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.270106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.270132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.270223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.270250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.427 [2024-10-07 09:48:56.270361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.427 [2024-10-07 09:48:56.270389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.427 qpair failed and we were unable to recover it.
00:28:07.428 [2024-10-07 09:48:56.270496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.428 [2024-10-07 09:48:56.270522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.428 qpair failed and we were unable to recover it.
00:28:07.428 [2024-10-07 09:48:56.270635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.428 [2024-10-07 09:48:56.270663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.428 qpair failed and we were unable to recover it.
00:28:07.428 [2024-10-07 09:48:56.270767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.428 [2024-10-07 09:48:56.270794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.428 qpair failed and we were unable to recover it.
00:28:07.428 [2024-10-07 09:48:56.270942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.428 [2024-10-07 09:48:56.270978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.428 qpair failed and we were unable to recover it.
00:28:07.428 [2024-10-07 09:48:56.271073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.271100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.271180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.271207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.271332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.271359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.271442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.271469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.271590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.271617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 
00:28:07.428 [2024-10-07 09:48:56.271736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.271763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.271880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.271907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.272050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.272085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.272177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.272203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.272319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.272347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 
00:28:07.428 [2024-10-07 09:48:56.272459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.272487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.272587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.272614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.272739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.272769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.272885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.272912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.273011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.273037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 
00:28:07.428 [2024-10-07 09:48:56.273150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.273184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.273326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.273353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.273469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.273496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.273585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.273614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.273735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.273765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 
00:28:07.428 [2024-10-07 09:48:56.273851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.273882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.273998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.274026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.274115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.274149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.274232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.274260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.274348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.274375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 
00:28:07.428 [2024-10-07 09:48:56.274463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.274489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.274578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.274605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.274729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.274758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.274850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.274877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.275016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.275065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 
00:28:07.428 [2024-10-07 09:48:56.275220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.275249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.275362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.275389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.275476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.275502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.275610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.428 [2024-10-07 09:48:56.275636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.428 qpair failed and we were unable to recover it. 00:28:07.428 [2024-10-07 09:48:56.275738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.275764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 
00:28:07.429 [2024-10-07 09:48:56.275853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.275880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.275969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.275996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.276106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.276132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.276227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.276254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.276390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.276416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 
00:28:07.429 [2024-10-07 09:48:56.276527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.276553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.276700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.276730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.276858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.276885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.276970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.276997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.277116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.277143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 
00:28:07.429 [2024-10-07 09:48:56.277231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.277259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.277380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.277412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.277528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.277555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.277641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.277674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.277764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.277791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 
00:28:07.429 [2024-10-07 09:48:56.277903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.277930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.278039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.278066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.278178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.278204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.278316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.278343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.278457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.278484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 
00:28:07.429 [2024-10-07 09:48:56.278564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.278597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.278688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.278716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.278823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.278850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.278935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.278962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.279075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.279103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 
00:28:07.429 [2024-10-07 09:48:56.279216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.279242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.279359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.279388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.279504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.279530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.279620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.279646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.279762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.279787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 
00:28:07.429 [2024-10-07 09:48:56.279868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.279893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.280025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.280051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.280164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.280190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.280302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.280327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.280416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.280442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 
00:28:07.429 [2024-10-07 09:48:56.280531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.280558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.280681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.280707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.429 qpair failed and we were unable to recover it. 00:28:07.429 [2024-10-07 09:48:56.280819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.429 [2024-10-07 09:48:56.280844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.280954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.280979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.281096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.281122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 
00:28:07.430 [2024-10-07 09:48:56.281234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.281260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.281368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.281394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.281529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.281554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.281640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.281673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.281752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.281778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 
00:28:07.430 [2024-10-07 09:48:56.281895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.281921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.282052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.282078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.282196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.282222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.282341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.282366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.282456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.282482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 
00:28:07.430 [2024-10-07 09:48:56.282594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.282620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.282773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.282800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.282881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.282907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.283005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.283030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.283142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.283169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 
00:28:07.430 [2024-10-07 09:48:56.283285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.283312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.283429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.283454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.283535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.283562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.283648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.283691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.283801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.283826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 
00:28:07.430 [2024-10-07 09:48:56.283968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.283998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.284119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.284145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.284227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.284254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.284351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.284377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.284458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.284485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 
00:28:07.430 [2024-10-07 09:48:56.284598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.284624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.284721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.284747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.284838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.284864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.284985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.285011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.285124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.285150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 
00:28:07.430 [2024-10-07 09:48:56.285228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.430 [2024-10-07 09:48:56.285253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.430 qpair failed and we were unable to recover it. 00:28:07.430 [2024-10-07 09:48:56.285336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.285362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.285447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.285472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.285592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.285617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.285743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.285770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 
00:28:07.431 [2024-10-07 09:48:56.285853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.285880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.285957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.285983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.286096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.286122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.286196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.286223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.286338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.286364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 
00:28:07.431 [2024-10-07 09:48:56.286474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.286515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.286600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.286628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.286777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.286817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.286967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.286996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.287088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.287116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 
00:28:07.431 [2024-10-07 09:48:56.287203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.287231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.287315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.287342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.287438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.287466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.287558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.287584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.287708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.287736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 
00:28:07.431 [2024-10-07 09:48:56.287850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.287878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.288002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.288035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.288143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.288171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.288286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.288313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.288392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.288418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 
00:28:07.431 [2024-10-07 09:48:56.288521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.288548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.288687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.288726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.288849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.288877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.288989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.289016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.289161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.289188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 
00:28:07.431 [2024-10-07 09:48:56.289271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.289303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.289449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.289475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.289592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.289618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.289714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.289740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.289833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.289860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 
00:28:07.431 [2024-10-07 09:48:56.289949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.289985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.290075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.290102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.290219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.290245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.290361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.290388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 00:28:07.431 [2024-10-07 09:48:56.290529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.431 [2024-10-07 09:48:56.290555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.431 qpair failed and we were unable to recover it. 
00:28:07.432 [2024-10-07 09:48:56.290703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.290742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.290845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.290873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.290997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.291105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.291227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 
00:28:07.432 [2024-10-07 09:48:56.291335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.291450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.291565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.291690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.291832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 
00:28:07.432 [2024-10-07 09:48:56.291946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.291974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.292056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.292083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.292200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.292228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.292353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.292380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.292462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.292489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 
00:28:07.432 [2024-10-07 09:48:56.292602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.292628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.292757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.292785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.292887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.292914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.293007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.293042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.293157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.293183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 
00:28:07.432 [2024-10-07 09:48:56.293274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.293301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.293419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.293446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.293592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.293619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.293737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.293767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.293862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.293888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 
00:28:07.432 [2024-10-07 09:48:56.294007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.294034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.294114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.294142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.294229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.294256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.294350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.294377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.294498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.294525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 
00:28:07.432 [2024-10-07 09:48:56.294676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.294709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.294798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.294825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.294912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.294938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.295052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.295078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.295185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.295211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 
00:28:07.432 [2024-10-07 09:48:56.295292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.295318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.295398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.295423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.295531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.432 [2024-10-07 09:48:56.295557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.432 qpair failed and we were unable to recover it. 00:28:07.432 [2024-10-07 09:48:56.295700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.433 [2024-10-07 09:48:56.295727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.433 qpair failed and we were unable to recover it. 00:28:07.433 [2024-10-07 09:48:56.295811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.433 [2024-10-07 09:48:56.295838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.433 qpair failed and we were unable to recover it. 
00:28:07.433 [2024-10-07 09:48:56.295919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.433 [2024-10-07 09:48:56.295945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.433 qpair failed and we were unable to recover it. 00:28:07.433 [2024-10-07 09:48:56.296037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.433 [2024-10-07 09:48:56.296064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.433 qpair failed and we were unable to recover it. 00:28:07.433 [2024-10-07 09:48:56.296155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.433 [2024-10-07 09:48:56.296181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.433 qpair failed and we were unable to recover it. 00:28:07.433 [2024-10-07 09:48:56.296261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.433 [2024-10-07 09:48:56.296287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.433 qpair failed and we were unable to recover it. 00:28:07.433 [2024-10-07 09:48:56.296406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.433 [2024-10-07 09:48:56.296435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.433 qpair failed and we were unable to recover it. 
00:28:07.433 [2024-10-07 09:48:56.296581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.296608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.296766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.296793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.296876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.296903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.297015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.297042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.297195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.297222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.297337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.297364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.297473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.297500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.297641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.297690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.297830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.297858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.297944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.297981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.298096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.298125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.298212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.298238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.298366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.298394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.298509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.298537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.298619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.298645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.298773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.298813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.298925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.298953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.299045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.299072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.299185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.299212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.299299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.299325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.299406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.299432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.299548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.299575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.299662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.299699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.299814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.299840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.299951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.299988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.300071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.300098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.300181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.300207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.300317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.300347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.300464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.300491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.300618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.300660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.300794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.300821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.433 qpair failed and we were unable to recover it.
00:28:07.433 [2024-10-07 09:48:56.300933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.433 [2024-10-07 09:48:56.300960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.301074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.301101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.301185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.301212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.301296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.301323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.301408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.301434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.301521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.301546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.301662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.301697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.301788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.301815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.301937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.301963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.302080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.302106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.302242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.302268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.302350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.302376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.302461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.302490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.302576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.302603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.302757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.302783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.302867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.302894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.303011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.303037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.303151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.303177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.303269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.303295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.303433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.303459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.303594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.303620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.303785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.303816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.303930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.303957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.304069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.304096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.304205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.304231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.304312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.304338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.304449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.304476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.304564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.304592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.304712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.304740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.304829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.304857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.304976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.305002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.305112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.305139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.305221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.305247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.305359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.305386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.305511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.305550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.305677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.305707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.305819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.305846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.305958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.305984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.306069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.434 [2024-10-07 09:48:56.306096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.434 qpair failed and we were unable to recover it.
00:28:07.434 [2024-10-07 09:48:56.306212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.306238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.306316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.306342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.306423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.306449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.306535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.306562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.306676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.306702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.306785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.306811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.306955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.306980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.307088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.307114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.307200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.307226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.307304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.307334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.307473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.307499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.307582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.307610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.307713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.307740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.307855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.307881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.307967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.307994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.308100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.308126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.308214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.308240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.308311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.308336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.308444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.308470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.308578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.308603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.308746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.308772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.308897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.308937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.309037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.309065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.309184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.309212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.309296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.435 [2024-10-07 09:48:56.309323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.435 qpair failed and we were unable to recover it.
00:28:07.435 [2024-10-07 09:48:56.309410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.309438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 00:28:07.435 [2024-10-07 09:48:56.309539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.309566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 00:28:07.435 [2024-10-07 09:48:56.309658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.309692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 00:28:07.435 [2024-10-07 09:48:56.309803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.309829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 00:28:07.435 [2024-10-07 09:48:56.309937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.309975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 
00:28:07.435 [2024-10-07 09:48:56.310117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.310144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 00:28:07.435 [2024-10-07 09:48:56.310233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.310260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 00:28:07.435 [2024-10-07 09:48:56.310401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.310428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 00:28:07.435 [2024-10-07 09:48:56.310517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.435 [2024-10-07 09:48:56.310544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.435 qpair failed and we were unable to recover it. 00:28:07.435 [2024-10-07 09:48:56.310630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.310657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 
00:28:07.436 [2024-10-07 09:48:56.310749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.310775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.310890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.310916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.311028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.311054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.311167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.311193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.311297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.311323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 
00:28:07.436 [2024-10-07 09:48:56.311404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.311430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.311532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.311557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.311685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.311712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.311793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.311818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.311920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.311946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 
00:28:07.436 [2024-10-07 09:48:56.312033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.312058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.312138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.312163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.312271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.312296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.312440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.312466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.312543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.312568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 
00:28:07.436 [2024-10-07 09:48:56.312689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.312716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.312801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.312826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.312940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.312965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.313045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.313070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.313156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.313181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 
00:28:07.436 [2024-10-07 09:48:56.313267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.313292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.313377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.313402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.313509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.313534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.313645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.313675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.313789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.313815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 
00:28:07.436 [2024-10-07 09:48:56.313896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.313920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.314004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.314030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.314138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.314163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.314237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.314271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.314356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.314381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 
00:28:07.436 [2024-10-07 09:48:56.314494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.314520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.314632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.314657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.314736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.314762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.314873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.314898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.314979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.315005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 
00:28:07.436 [2024-10-07 09:48:56.315088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.315113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.315223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.315249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.436 [2024-10-07 09:48:56.315333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.436 [2024-10-07 09:48:56.315357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.436 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.315465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.315491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.315601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.315626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 
00:28:07.437 [2024-10-07 09:48:56.315745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.315784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.315875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.315902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.316030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.316057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.316172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.316199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.316280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.316305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 
00:28:07.437 [2024-10-07 09:48:56.316418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.316445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.316554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.316580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.316691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.316718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.316839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.316865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.316970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.316995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 
00:28:07.437 [2024-10-07 09:48:56.317103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.317129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.317250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.317277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.317375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.317404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.317487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.317513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.317595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.317622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 
00:28:07.437 [2024-10-07 09:48:56.317722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.317748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.317833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.317857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.317964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.317990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.318096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.318122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.318233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.318258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 
00:28:07.437 [2024-10-07 09:48:56.318340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.318364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.318473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.318498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.318641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.318673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.318767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.318793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.318933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.318958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 
00:28:07.437 [2024-10-07 09:48:56.319040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.319065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.319148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.319173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.319256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.319283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.319392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.319416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.319496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.319520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 
00:28:07.437 [2024-10-07 09:48:56.319600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.319625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.319747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.319774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.319887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.319916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.319996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.320021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 00:28:07.437 [2024-10-07 09:48:56.320105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.437 [2024-10-07 09:48:56.320131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.437 qpair failed and we were unable to recover it. 
00:28:07.437 [2024-10-07 09:48:56.320241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.437 [2024-10-07 09:48:56.320267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.437 qpair failed and we were unable to recover it.
00:28:07.438 [2024-10-07 09:48:56.321460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.438 [2024-10-07 09:48:56.321488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.438 qpair failed and we were unable to recover it.
00:28:07.441 [... identical connect() failed / sock connection error / qpair failed sequence repeated through 2024-10-07 09:48:56.334932 ...]
00:28:07.441 [2024-10-07 09:48:56.335041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.335067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.335181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.335206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.335326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.335351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.335458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.335484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.335566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.335591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 
00:28:07.441 [2024-10-07 09:48:56.335702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.335729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.335813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.335838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.335927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.335952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.336028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.336136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 
00:28:07.441 [2024-10-07 09:48:56.336242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.336354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.336464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.336564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.336684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 
00:28:07.441 [2024-10-07 09:48:56.336789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.336895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.336920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.337001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.337027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.337108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.337134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.337218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.337245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 
00:28:07.441 [2024-10-07 09:48:56.337286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb91f0 (9): Bad file descriptor 00:28:07.441 [2024-10-07 09:48:56.337409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.337449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.337542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.337571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.337598] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.441 [2024-10-07 09:48:56.337631] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.441 [2024-10-07 09:48:56.337649] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.441 [2024-10-07 09:48:56.337662] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.441 [2024-10-07 09:48:56.337660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.337684] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:07.441 [2024-10-07 09:48:56.337696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 
00:28:07.441 [2024-10-07 09:48:56.337807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.337834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.337952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.337978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.338089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.338116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.338203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.338236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.338332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.338359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 
00:28:07.441 [2024-10-07 09:48:56.338451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.338476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.338551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.338576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.338660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.338691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.338781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.441 [2024-10-07 09:48:56.338806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.441 qpair failed and we were unable to recover it. 00:28:07.441 [2024-10-07 09:48:56.338892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.338917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 
00:28:07.442 [2024-10-07 09:48:56.338992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.339106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.339221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.339333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.339448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 
00:28:07.442 [2024-10-07 09:48:56.339551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 [2024-10-07 09:48:56.339523] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.339550] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:28:07.442 [2024-10-07 09:48:56.339663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339600] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:28:07.442 [2024-10-07 09:48:56.339706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 [2024-10-07 09:48:56.339603] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.339829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.339960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.339986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 
00:28:07.442 [2024-10-07 09:48:56.340077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.340101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.340187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.340212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.340302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.340327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.340407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.340431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.340522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.340547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 
00:28:07.442 [2024-10-07 09:48:56.340631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.340655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.340750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.340782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.340884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.340914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.341010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.341131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 
00:28:07.442 [2024-10-07 09:48:56.341243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.341356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.341468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.341580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.341682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 
00:28:07.442 [2024-10-07 09:48:56.341790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.341905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.341930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.342014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.342040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.342122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.342147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.342226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.342251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 
00:28:07.442 [2024-10-07 09:48:56.342336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.342361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.342443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.342472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.342596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.342630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.342728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.342758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.342874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.342901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 
00:28:07.442 [2024-10-07 09:48:56.342990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.343017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.343109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.442 [2024-10-07 09:48:56.343143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.442 qpair failed and we were unable to recover it. 00:28:07.442 [2024-10-07 09:48:56.343240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.443 [2024-10-07 09:48:56.343267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.443 qpair failed and we were unable to recover it. 00:28:07.443 [2024-10-07 09:48:56.343350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.443 [2024-10-07 09:48:56.343377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.443 qpair failed and we were unable to recover it. 00:28:07.443 [2024-10-07 09:48:56.343460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.443 [2024-10-07 09:48:56.343487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.443 qpair failed and we were unable to recover it. 
00:28:07.443 [2024-10-07 09:48:56.343564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.443 [2024-10-07 09:48:56.343590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.443 qpair failed and we were unable to recover it. 00:28:07.443 [2024-10-07 09:48:56.343682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.443 [2024-10-07 09:48:56.343708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.443 qpair failed and we were unable to recover it. 00:28:07.443 [2024-10-07 09:48:56.343820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.443 [2024-10-07 09:48:56.343846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.443 qpair failed and we were unable to recover it. 00:28:07.443 [2024-10-07 09:48:56.343923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.443 [2024-10-07 09:48:56.343948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.443 qpair failed and we were unable to recover it. 00:28:07.443 [2024-10-07 09:48:56.344028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.443 [2024-10-07 09:48:56.344053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.443 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-10-07 09:48:56.356738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.356764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.356846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.356872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.356984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.357085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.357189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-10-07 09:48:56.357302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.357406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.357507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.357612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.357729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-10-07 09:48:56.357839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.357953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.357978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.358056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.358082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.358164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.358189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-10-07 09:48:56.358263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-10-07 09:48:56.358288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-10-07 09:48:56.358366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.358392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.358467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.358493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.358566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.358592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.358681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.358707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.358798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.358823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-10-07 09:48:56.358902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.358927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.359010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.359150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.359259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.359363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-10-07 09:48:56.359468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.359576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.359700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.359823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.359941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.359968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-10-07 09:48:56.360057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.360084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.360177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.360205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.360323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.360350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.360440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.360467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.360558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.360583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-10-07 09:48:56.360675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.360702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.360780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.360806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.360892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.360917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.361003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.361108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-10-07 09:48:56.361218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.361316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.361418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.361521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.361651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-10-07 09:48:56.361768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.361876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.361901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.362016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.362042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.362121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.362146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.362230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.362256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-10-07 09:48:56.362331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.362356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.362442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.362466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.362542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-10-07 09:48:56.362567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-10-07 09:48:56.362657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.362691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.362772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.362798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 
00:28:07.738 [2024-10-07 09:48:56.362880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.362906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.362983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.363084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.363184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.363286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 
00:28:07.738 [2024-10-07 09:48:56.363384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.363528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.363630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.363751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.363857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 
00:28:07.738 [2024-10-07 09:48:56.363970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.363994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.364067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.364172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.364288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.364394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 
00:28:07.738 [2024-10-07 09:48:56.364502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.364607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.364722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.364827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.364931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.364956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 
00:28:07.738 [2024-10-07 09:48:56.365044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.365185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.365286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.365387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.365496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 
00:28:07.738 [2024-10-07 09:48:56.365599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.365717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.365857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.365958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.365983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.366069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.366095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 
00:28:07.738 [2024-10-07 09:48:56.366172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.366199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.366276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.366302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.366409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.366434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.366522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.366547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.738 [2024-10-07 09:48:56.366630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.366655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 
00:28:07.738 [2024-10-07 09:48:56.366742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.738 [2024-10-07 09:48:56.366766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.738 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.366849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.366874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.366948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.366974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.367047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.367072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.367163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.367188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 
00:28:07.739 [2024-10-07 09:48:56.367261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.367285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.367375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.367405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.367490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.367518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.367640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.367678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.367757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.367785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 
00:28:07.739 [2024-10-07 09:48:56.367869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.367896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.367980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.368100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.368238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.368350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 
00:28:07.739 [2024-10-07 09:48:56.368453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.368596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.368707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.368817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.368924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.368948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 
00:28:07.739 [2024-10-07 09:48:56.369025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.369134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.369237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.369347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.369454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 
00:28:07.739 [2024-10-07 09:48:56.369567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.369677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.369782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.369887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.369913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.369997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.370023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 
00:28:07.739 [2024-10-07 09:48:56.370096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.370122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.370204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.370229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.370334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.370360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.370443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.370468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.739 [2024-10-07 09:48:56.370557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.370583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 
00:28:07.739 [2024-10-07 09:48:56.370664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.739 [2024-10-07 09:48:56.370696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.739 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.370777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.370802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.370885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.370911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.371017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.371118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-10-07 09:48:56.371225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.371320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.371461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.371573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.371687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-10-07 09:48:56.371796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.371897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.371924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.371996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.372099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.372206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-10-07 09:48:56.372316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.372427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.372530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.372689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.372795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-10-07 09:48:56.372909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.372934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.373019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.373125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.373236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.373340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-10-07 09:48:56.373446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.373552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.373688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.373793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.373906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.373932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-10-07 09:48:56.374015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.374041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-10-07 09:48:56.374129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-10-07 09:48:56.374154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.374233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.374259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.374346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.374372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.374450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.374475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-10-07 09:48:56.374592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.374618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.374716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.374742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.374834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.374860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.374944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.374969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.375046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.375072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-10-07 09:48:56.375149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.375174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.375258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.375285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.375393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.375418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.375491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.375519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-10-07 09:48:56.375603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-10-07 09:48:56.375629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-10-07 09:48:56.375721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.741 [2024-10-07 09:48:56.375747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.741 qpair failed and we were unable to recover it.
[log condensed: the three-line sequence above (posix_sock_create connect() failure with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 09:48:56.375721 through 09:48:56.389300, alternating between tqpair=0x1fab230 and tqpair=0x7fe7a8000b90; every repetition carries the same errno = 111 and the same target address and port]
00:28:07.744 [2024-10-07 09:48:56.389386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.744 [2024-10-07 09:48:56.389414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.744 qpair failed and we were unable to recover it. 00:28:07.744 [2024-10-07 09:48:56.389489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.744 [2024-10-07 09:48:56.389517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.744 qpair failed and we were unable to recover it. 00:28:07.744 [2024-10-07 09:48:56.389589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.744 [2024-10-07 09:48:56.389615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.744 qpair failed and we were unable to recover it. 00:28:07.744 [2024-10-07 09:48:56.389726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.744 [2024-10-07 09:48:56.389753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.744 qpair failed and we were unable to recover it. 00:28:07.744 [2024-10-07 09:48:56.389840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.744 [2024-10-07 09:48:56.389866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.744 qpair failed and we were unable to recover it. 
00:28:07.744 [2024-10-07 09:48:56.389982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.744 [2024-10-07 09:48:56.390009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.744 qpair failed and we were unable to recover it. 00:28:07.744 [2024-10-07 09:48:56.390091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.744 [2024-10-07 09:48:56.390118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.744 qpair failed and we were unable to recover it. 00:28:07.744 [2024-10-07 09:48:56.390231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.744 [2024-10-07 09:48:56.390259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.744 qpair failed and we were unable to recover it. 00:28:07.744 [2024-10-07 09:48:56.390335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.390361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.390468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.390494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 
00:28:07.745 [2024-10-07 09:48:56.390573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.390599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.390684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.390716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.390832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.390858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.390950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.390976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.391050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.391077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 
00:28:07.745 [2024-10-07 09:48:56.391186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.391214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.391307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.391333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.391407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.391432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.391510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.391535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.391650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.391683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 
00:28:07.745 [2024-10-07 09:48:56.391756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.391782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.391860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.391885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.391991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.392095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.392201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 
00:28:07.745 [2024-10-07 09:48:56.392328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.392465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.392573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.392683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.392792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 
00:28:07.745 [2024-10-07 09:48:56.392931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.392957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.393039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.393066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.393149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.393176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.393250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.393277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.393357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.393384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 
00:28:07.745 [2024-10-07 09:48:56.393494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.393519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.393598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.393624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.745 [2024-10-07 09:48:56.393748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.745 [2024-10-07 09:48:56.393775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.745 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.393852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.393882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.393960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.393984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 
00:28:07.746 [2024-10-07 09:48:56.394071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.394097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.394171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.394196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.394287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.394316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.394398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.394425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.394519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.394545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 
00:28:07.746 [2024-10-07 09:48:56.394629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.394657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.394789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.394815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.394899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.394927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.395017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.395165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 
00:28:07.746 [2024-10-07 09:48:56.395281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.395387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.395523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.395632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.395745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 
00:28:07.746 [2024-10-07 09:48:56.395857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.395965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.395989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.396096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.396121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.396228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.396254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.396361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.396386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 
00:28:07.746 [2024-10-07 09:48:56.396480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.396509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.396588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.396615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.396696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.396723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.396806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.396832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.396919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.396946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 
00:28:07.746 [2024-10-07 09:48:56.397080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.397121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.397237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.397264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.397353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.397379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.397489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.397514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.397597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.397622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 
00:28:07.746 [2024-10-07 09:48:56.397714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.397740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.397855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.397883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.397968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.397994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.398115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.746 [2024-10-07 09:48:56.398143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.746 qpair failed and we were unable to recover it. 00:28:07.746 [2024-10-07 09:48:56.398224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.398252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 
00:28:07.747 [2024-10-07 09:48:56.398335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.398362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.398448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.398476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.398559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.398586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.398684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.398724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.398827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.398855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 
00:28:07.747 [2024-10-07 09:48:56.398936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.398962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.399051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.399076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.399192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.399219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.399329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.399355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.399437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.399464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 
00:28:07.747 [2024-10-07 09:48:56.399573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.399599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.399712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.399742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.399855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.399883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.399977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.400087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 
00:28:07.747 [2024-10-07 09:48:56.400209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.400351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.400465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.400564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.400712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 
00:28:07.747 [2024-10-07 09:48:56.400825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.400967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.400993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.401115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.401142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.401223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.401251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.401365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.401392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 
00:28:07.747 [2024-10-07 09:48:56.401510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.401537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.401611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.401637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.401732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.401759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.401876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.401903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.402027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.402054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 
00:28:07.747 [2024-10-07 09:48:56.402137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.402170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.747 [2024-10-07 09:48:56.402294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.747 [2024-10-07 09:48:56.402321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.747 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.402399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.402426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.402510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.402536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.402652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.402685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 
00:28:07.748 [2024-10-07 09:48:56.402831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.402858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.402933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.402959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.403070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.403097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.403175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.403202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.403322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.403348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 
00:28:07.748 [2024-10-07 09:48:56.403465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.403492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.403623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.403662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.403768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.403796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.403884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.403910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.404005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.404031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 
00:28:07.748 [2024-10-07 09:48:56.404148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.404174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.404265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.404292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.404376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.404404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.404513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.404552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.404639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.404673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 
00:28:07.748 [2024-10-07 09:48:56.404766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.404792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.404883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.404909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.404999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.405165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.405280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 
00:28:07.748 [2024-10-07 09:48:56.405386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.405499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.405610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.405730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.405866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 
00:28:07.748 [2024-10-07 09:48:56.405969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.405994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.406067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.406094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.406174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.406199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.406275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.406301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.406416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.406442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 
00:28:07.748 [2024-10-07 09:48:56.406524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.406552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.406671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.406701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.406817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.406844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.406921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.748 [2024-10-07 09:48:56.406948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.748 qpair failed and we were unable to recover it. 00:28:07.748 [2024-10-07 09:48:56.407032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.407059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 
00:28:07.749 [2024-10-07 09:48:56.407150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.407182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.407262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.407290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.407399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.407438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.407542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.407570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.407655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.407691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 
00:28:07.749 [2024-10-07 09:48:56.407775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.407802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.407896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.407922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.408004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.408111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.408247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 
00:28:07.749 [2024-10-07 09:48:56.408349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.408483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.408588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.408698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.408819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 
00:28:07.749 [2024-10-07 09:48:56.408938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.408964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.409071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.409097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.409180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.409206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.409322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.409348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.409455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.409481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 
00:28:07.749 [2024-10-07 09:48:56.409554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.409580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.409656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.409688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.409768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.409794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.409871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.409897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 00:28:07.749 [2024-10-07 09:48:56.409983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.749 [2024-10-07 09:48:56.410008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.749 qpair failed and we were unable to recover it. 
00:28:07.749 [2024-10-07 09:48:56.410120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.410148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.410263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.410291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.410407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.410438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.410525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.410552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.410637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.410670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.410785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.410812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.410898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.410925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.411003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.411030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.411144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.411171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.411246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.411273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.411350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.749 [2024-10-07 09:48:56.411377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.749 qpair failed and we were unable to recover it.
00:28:07.749 [2024-10-07 09:48:56.411501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.411527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.411639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.411671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.411761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.411786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.411867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.411893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.411976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.412094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.412239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.412369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.412505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.412617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.412734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.412852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.412965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.412991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.413096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.413122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.413199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.413228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.413347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.413374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.413459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.413486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.413567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.413593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.413715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.413741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.413820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.413846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.413929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.413955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.414962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.414987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.415071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.415097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.415210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.750 [2024-10-07 09:48:56.415237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.750 qpair failed and we were unable to recover it.
00:28:07.750 [2024-10-07 09:48:56.415361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.415388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.415500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.415527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.415605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.415631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.415751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.415779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.415868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.415896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.416006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.416033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.416111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.416138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.416255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.416281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.416393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.416419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.416528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.416554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.416670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.416698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.416789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.416816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.416930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.416956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.417042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.417069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.417148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.417175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.417251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.417277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.417370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.417398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.417497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.417536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.417661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.417708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.417793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.417820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.417900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.417926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.418007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.418033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.418148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.418174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.418254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.418280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.418387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.418412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.418500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.418525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.418601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.418634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.418729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.418756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.418871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.418898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.419005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.419032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.419113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.419140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.419216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.419242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.419352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.419378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.419486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.419513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.419587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.419614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.419699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.419727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.751 qpair failed and we were unable to recover it.
00:28:07.751 [2024-10-07 09:48:56.419853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.751 [2024-10-07 09:48:56.419892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.419990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.420105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.420208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.420322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.420430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.420573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.420694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.420799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.420904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.420929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.421911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.421936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.422055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.422082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.422152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.422178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.422261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.422289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.422402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.422428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.422514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.422543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.422616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.422643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.422785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.422813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.422896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.422923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.423061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.423198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.423309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.423416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.423568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.423700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.423796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.423902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.423981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.752 [2024-10-07 09:48:56.424006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.752 qpair failed and we were unable to recover it.
00:28:07.752 [2024-10-07 09:48:56.424087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.752 [2024-10-07 09:48:56.424115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.752 qpair failed and we were unable to recover it. 00:28:07.752 [2024-10-07 09:48:56.424193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.752 [2024-10-07 09:48:56.424220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.752 qpair failed and we were unable to recover it. 00:28:07.752 [2024-10-07 09:48:56.424310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.752 [2024-10-07 09:48:56.424337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.752 qpair failed and we were unable to recover it. 00:28:07.752 [2024-10-07 09:48:56.424475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.424502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.424590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.424618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 
00:28:07.753 [2024-10-07 09:48:56.424713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.424742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.424854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.424879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.425002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.425112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.425222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 
00:28:07.753 [2024-10-07 09:48:56.425331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.425460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.425592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.425735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.425841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 
00:28:07.753 [2024-10-07 09:48:56.425947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.425972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.426081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.426107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.426184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.426210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.426317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.426344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.426459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.426486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 
00:28:07.753 [2024-10-07 09:48:56.426561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.426586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.426663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.426702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.426790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.426816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.426932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.426957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.427041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.427068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 
00:28:07.753 [2024-10-07 09:48:56.427161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.427189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.427277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.427305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.427413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.427439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.427518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.427545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.427656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.427691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 
00:28:07.753 [2024-10-07 09:48:56.427779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.427806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.427879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.427906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.427989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.428016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.428144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.428171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.428240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.428267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 
00:28:07.753 [2024-10-07 09:48:56.428351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.428378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.428498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.428523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.428648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.428697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.428793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.428821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.428898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.428924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 
00:28:07.753 [2024-10-07 09:48:56.429029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.429055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.753 [2024-10-07 09:48:56.429160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.753 [2024-10-07 09:48:56.429186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.753 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.429271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.429298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.429387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.429426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.429552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.429582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 
00:28:07.754 [2024-10-07 09:48:56.429677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.429706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.429794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.429821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.429911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.429938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.430054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.430166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 
00:28:07.754 [2024-10-07 09:48:56.430281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.430390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.430494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.430594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.430705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 
00:28:07.754 [2024-10-07 09:48:56.430809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.430938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.430964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.431060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.431163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.431270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 
00:28:07.754 [2024-10-07 09:48:56.431373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.431473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.431613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.431730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.431837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 
00:28:07.754 [2024-10-07 09:48:56.431952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.431978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.432063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.432089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.432175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.432204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.432283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.432310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.432411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.432437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 
00:28:07.754 [2024-10-07 09:48:56.432552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.432579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.432654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.432687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.432772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.432798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.432906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.432932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 00:28:07.754 [2024-10-07 09:48:56.433006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.754 [2024-10-07 09:48:56.433030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.754 qpair failed and we were unable to recover it. 
00:28:07.754 [2024-10-07 09:48:56.433108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.754 [2024-10-07 09:48:56.433138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.754 qpair failed and we were unable to recover it.
00:28:07.754 [2024-10-07 09:48:56.433244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.433269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.433375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.433400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.433477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.433502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.433584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.433612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.433705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.433734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.433818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.433846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.433929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.433955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.434062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.434178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.434279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.434442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.434556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.434662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.434790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.434907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.434992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.435018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.435125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.435151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.435228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.435254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.435361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.435389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.435482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.435521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.435615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.435643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.435730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.435757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.435872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.435898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.435984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.436010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.436123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.436149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.436255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.436281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.436403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.436430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.436538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.436564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.436647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.436679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.436762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.436788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.436871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.436897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.436977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.437004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.437116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.437142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.437225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.437251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.437328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.437355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.437434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.437461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.437549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.437575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.437654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.437690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.755 qpair failed and we were unable to recover it.
00:28:07.755 [2024-10-07 09:48:56.437775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.755 [2024-10-07 09:48:56.437801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.437884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.437923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.438022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.438049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.438131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.438158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.438241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.438268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.438344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.438371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.438451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.438478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.438597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.438637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.438773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.438812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.438900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.438928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.439012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.439039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.439149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.439176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.439264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.439291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.439436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.439464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.439611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.439637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.439742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.439769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.439845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.439871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.439952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.439978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.440060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.440086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.440160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.440186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.440265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.440291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.440373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.440399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.440517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.440543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.440631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.440657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.440773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.440799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.440915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.440942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.441051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.441077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.441219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.441245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.441329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.441358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.441477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.441503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.441586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.441613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.441700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.441727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.441830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.441856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.441967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.441994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.442079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.442107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.442225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.442251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.442331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.442357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.442481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.442512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.442592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.756 [2024-10-07 09:48:56.442618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.756 qpair failed and we were unable to recover it.
00:28:07.756 [2024-10-07 09:48:56.442716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.442744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.442830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.442856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.442941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.442972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.443093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.443120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.443228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.443256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.443348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.443387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.443485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.443512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.443619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.443645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.443747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.443774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.443903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.443931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.444049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.444165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.444280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.444386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.444524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.444672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.444788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.444891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.444976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.445912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.445997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.446024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.446112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.446139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.446251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.446285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.446377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.446406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.446520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.446546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.446629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.446657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.446781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.446808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.446895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.446922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.447035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.447063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.447173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.757 [2024-10-07 09:48:56.447199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.757 qpair failed and we were unable to recover it.
00:28:07.757 [2024-10-07 09:48:56.447309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-10-07 09:48:56.447335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.757 qpair failed and we were unable to recover it. 00:28:07.757 [2024-10-07 09:48:56.447445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.447472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.447552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.447578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.447663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.447702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.447824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.447850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 
00:28:07.758 [2024-10-07 09:48:56.447950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.447977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.448063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.448089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.448173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.448202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.448318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.448344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.448439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.448466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 
00:28:07.758 [2024-10-07 09:48:56.448547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.448575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.448659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.448693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.448777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.448804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.448894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.448920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.449039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.449066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 
00:28:07.758 [2024-10-07 09:48:56.449150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.449176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.449259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.449286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.449369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.449395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.449474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.449502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.449611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.449637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 
00:28:07.758 [2024-10-07 09:48:56.449816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.449845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.449921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.449948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.450028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.450054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.450138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.450164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.450274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.450301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 
00:28:07.758 [2024-10-07 09:48:56.450407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.450432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.450517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.450542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.450623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.450649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.450751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.450779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.450860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.450886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 
00:28:07.758 [2024-10-07 09:48:56.451000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.451105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.451245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.451391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.451491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 
00:28:07.758 [2024-10-07 09:48:56.451604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.451718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.451860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.451970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.451997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 00:28:07.758 [2024-10-07 09:48:56.452109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.758 [2024-10-07 09:48:56.452135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.758 qpair failed and we were unable to recover it. 
00:28:07.759 [2024-10-07 09:48:56.452222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.452250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.452366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.452393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.452468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.452495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.452573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.452599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.452683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.452709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 
00:28:07.759 [2024-10-07 09:48:56.452797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.452822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.452918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.452947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.453033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.453062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.453148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.453179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.453292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.453318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 
00:28:07.759 [2024-10-07 09:48:56.453400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.453426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.453535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.453563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.453649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.453684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.453770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.453796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.453879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.453905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 
00:28:07.759 [2024-10-07 09:48:56.453985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.454094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.454235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.454341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.454454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 
00:28:07.759 [2024-10-07 09:48:56.454562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.454683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.454800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.454906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.454933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.455020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.455047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 
00:28:07.759 [2024-10-07 09:48:56.455187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.455213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.455297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.455323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.455442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.455468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.455550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.455577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.455689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.455716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 
00:28:07.759 [2024-10-07 09:48:56.455798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.455825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.455938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.455963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.456048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.456074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.456156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.759 [2024-10-07 09:48:56.456181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.759 qpair failed and we were unable to recover it. 00:28:07.759 [2024-10-07 09:48:56.456252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.456278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 
00:28:07.760 [2024-10-07 09:48:56.456379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.456406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.456490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.456516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.456654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.456686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.456775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.456800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.456878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.456904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 
00:28:07.760 [2024-10-07 09:48:56.456979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.457110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.457215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.457319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.457437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 
00:28:07.760 [2024-10-07 09:48:56.457546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.457661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.457803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.457939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.457965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.458050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.458075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 
00:28:07.760 [2024-10-07 09:48:56.458181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.458207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.458281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.458307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.458402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.458431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.458519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.458546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.458659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.458693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 
00:28:07.760 [2024-10-07 09:48:56.458772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.458798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.458876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.458903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.458986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.459125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.459238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 
00:28:07.760 [2024-10-07 09:48:56.459353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.459463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.459563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.459706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.459814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 
00:28:07.760 [2024-10-07 09:48:56.459921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.459947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.460058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.460084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.460161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.460189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.460272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.460298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.460374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.460401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 
00:28:07.760 [2024-10-07 09:48:56.460536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.460562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.460637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.460663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.760 [2024-10-07 09:48:56.460753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.760 [2024-10-07 09:48:56.460781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.760 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.460890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.460917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.460998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 
00:28:07.761 [2024-10-07 09:48:56.461132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.461234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.461372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.461481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.461591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 
00:28:07.761 [2024-10-07 09:48:56.461737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.461855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.461967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.461994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.462107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.462134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.462244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.462270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 
00:28:07.761 [2024-10-07 09:48:56.462366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.462394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.462503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.462533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.462620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.462646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.462734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.462759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.462836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.462861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 
00:28:07.761 [2024-10-07 09:48:56.462942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.462968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.463102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.463203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.463306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.463417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 
00:28:07.761 [2024-10-07 09:48:56.463526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.463638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.463755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.463858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.463971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.463998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 
00:28:07.761 [2024-10-07 09:48:56.464086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.464114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.464229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.464254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.464332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.464357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.464463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.464488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.464565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.464590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 
00:28:07.761 [2024-10-07 09:48:56.464685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.464712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.464786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.464811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.464894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.464920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.465000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.465026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.465106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.465133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 
00:28:07.761 [2024-10-07 09:48:56.465270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.465296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.761 qpair failed and we were unable to recover it. 00:28:07.761 [2024-10-07 09:48:56.465380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.761 [2024-10-07 09:48:56.465405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.465491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.465517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.465588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.465614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.465703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.465729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 
00:28:07.762 [2024-10-07 09:48:56.465806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.465831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.465913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.465939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.466080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.466106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.466187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.466213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.466286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.466312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 
00:28:07.762 [2024-10-07 09:48:56.466388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.466414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.466495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.466521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.466631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.466656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.466773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.466799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.466885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.466914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 
00:28:07.762 [2024-10-07 09:48:56.467051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.467174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.467288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.467386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.467492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 
00:28:07.762 [2024-10-07 09:48:56.467627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.467730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.467865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.467963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.467989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.468071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.468096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 
00:28:07.762 [2024-10-07 09:48:56.468202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.468228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.468339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.468365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.468437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.468463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.468575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.468601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.468675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.468702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 
00:28:07.762 [2024-10-07 09:48:56.468773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.468799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.468887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.468913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.469025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.469050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.469126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.469152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.469242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.469268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 
00:28:07.762 [2024-10-07 09:48:56.469415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.469440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.469519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.469545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.469622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.469648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.469731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.469756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 00:28:07.762 [2024-10-07 09:48:56.469835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.762 [2024-10-07 09:48:56.469860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.762 qpair failed and we were unable to recover it. 
00:28:07.762 [2024-10-07 09:48:56.469940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.469966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.470038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.470063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.470143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.470168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.470281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.470307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.470393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.763 [2024-10-07 09:48:56.470418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 
00:28:07.763 [2024-10-07 09:48:56.470489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.470515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:07.763 [2024-10-07 09:48:56.470629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.470655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:07.763 [2024-10-07 09:48:56.470750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.470776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:07.763 [2024-10-07 09:48:56.470854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.470879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 
00:28:07.763 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.763 [2024-10-07 09:48:56.470969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.470996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.471073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.471099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.471187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.471223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.471368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.471396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.471502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.471528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 
00:28:07.763 [2024-10-07 09:48:56.471611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.471637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.471734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.471761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.471848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.471875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.471959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.471998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.472111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.472138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 
00:28:07.763 [2024-10-07 09:48:56.472259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.472295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.472382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.472410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.472552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.472580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.472654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.472702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.472782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.472810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 
00:28:07.763 [2024-10-07 09:48:56.472908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.472936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.473023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.473051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.473142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.473170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.473253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.473288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.473377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.473403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 
00:28:07.763 [2024-10-07 09:48:56.473524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.473550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.473628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.473654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.473761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.763 [2024-10-07 09:48:56.473787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.763 qpair failed and we were unable to recover it. 00:28:07.763 [2024-10-07 09:48:56.473892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.473917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.474009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.474035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 
00:28:07.764 [2024-10-07 09:48:56.474111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.474136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.474279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.474313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.474423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.474449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.474523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.474549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.474655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.474690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 
00:28:07.764 [2024-10-07 09:48:56.474808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.474834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.474914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.474940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.475050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.475076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.475167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.475200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.475286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.475311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 
00:28:07.764 [2024-10-07 09:48:56.475397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.475428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.475518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.475544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.475685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.475712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.475794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.475820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.475934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.475961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 
00:28:07.764 [2024-10-07 09:48:56.476050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.476156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.476260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.476380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.476498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 
00:28:07.764 [2024-10-07 09:48:56.476611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.476732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.476847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.476954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.476981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.477058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.477085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 
00:28:07.764 [2024-10-07 09:48:56.477160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.477187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.477291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.477318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.477393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.477420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.477542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.477568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.477649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.477693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 
00:28:07.764 [2024-10-07 09:48:56.477784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.477812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.477890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.477917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.478029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.478056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.478139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.478166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 00:28:07.764 [2024-10-07 09:48:56.478243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.478271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.764 qpair failed and we were unable to recover it. 
00:28:07.764 [2024-10-07 09:48:56.478396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.764 [2024-10-07 09:48:56.478423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.478520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.478548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.478661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.478696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.478788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.478815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.478891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.478917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 
00:28:07.765 [2024-10-07 09:48:56.479005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.479031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.479149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.479176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.479259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.479287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.479402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.479429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.479542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.479569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 
00:28:07.765 [2024-10-07 09:48:56.479642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.479675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.479760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.479787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.479865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.479892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.480011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.480049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.480166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.480192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 
00:28:07.765 [2024-10-07 09:48:56.480288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.480315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.480396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.480422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.480535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.480574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.480698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.480727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.480818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.480844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 
00:28:07.765 [2024-10-07 09:48:56.480957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.480983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.481074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.481100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.481206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.481231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.481320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.481345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.481424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.481450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 
00:28:07.765 [2024-10-07 09:48:56.481556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.481582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.481681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.481710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.481826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.481853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.481935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.481961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.482041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.482067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 
00:28:07.765 [2024-10-07 09:48:56.482153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.482179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.482284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.482315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.482432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.482457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.482540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.482566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.482646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.482691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 
00:28:07.765 [2024-10-07 09:48:56.482777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.482805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.482900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.765 [2024-10-07 09:48:56.482927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.765 qpair failed and we were unable to recover it. 00:28:07.765 [2024-10-07 09:48:56.483007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.483033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.483123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.483150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.483268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.483294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 
00:28:07.766 [2024-10-07 09:48:56.483409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.483436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.483519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.483545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.483629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.483661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.483779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.483805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.483883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.483909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 
00:28:07.766 [2024-10-07 09:48:56.483993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.484129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.484235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.484334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.484469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 
00:28:07.766 [2024-10-07 09:48:56.484582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.484695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.484804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.484908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.484934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.485021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.485055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 
00:28:07.766 [2024-10-07 09:48:56.485165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.485191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.485295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.485321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.485401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.485427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.485535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.485562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.485650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.485683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 
00:28:07.766 [2024-10-07 09:48:56.485824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.485850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.485932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.485957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.486065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.486091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.486165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.486191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.486303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.486328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 
00:28:07.766 [2024-10-07 09:48:56.486459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.486486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.486570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.486597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.486746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.486773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.486853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.486880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.486959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.486985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 
00:28:07.766 [2024-10-07 09:48:56.487131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.487157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.487241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.487267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.487348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.487373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.487445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.487472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.487582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.487608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 
00:28:07.766 [2024-10-07 09:48:56.487704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.766 [2024-10-07 09:48:56.487731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.766 qpair failed and we were unable to recover it. 00:28:07.766 [2024-10-07 09:48:56.487855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.487881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.487962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.487989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.488067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.488092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.488202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.488229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 
00:28:07.767 [2024-10-07 09:48:56.488348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.488377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.488466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.488492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.488562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.488588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.488682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.488709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.488815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.488840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 
00:28:07.767 [2024-10-07 09:48:56.488919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.488945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.489021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.489055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.767 [2024-10-07 09:48:56.489165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.489191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:07.767 [2024-10-07 09:48:56.489301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.489327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:07.767 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.767 [2024-10-07 09:48:56.489440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.489466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.489580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.489606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.489708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.489736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.489886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.489912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.489999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 
00:28:07.767 [2024-10-07 09:48:56.490102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.490209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.490310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.490412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.490545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 
00:28:07.767 [2024-10-07 09:48:56.490655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.490800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.490956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.490982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.491067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.491094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.491165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.491191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 
00:28:07.767 [2024-10-07 09:48:56.491281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.491307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.491416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.491446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.491558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.491585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.491680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.491706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.491785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.491811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 
00:28:07.767 [2024-10-07 09:48:56.491918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.491943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.492033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.492060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.492168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.492194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.767 [2024-10-07 09:48:56.492276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.767 [2024-10-07 09:48:56.492302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.767 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.492411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.492436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 
00:28:07.768 [2024-10-07 09:48:56.492547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.492574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.492651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.492684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.492803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.492829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.492915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.492942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.493044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.493070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 
00:28:07.768 [2024-10-07 09:48:56.493164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.493189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.493300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.493326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.493404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.493431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.493539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.493565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.493636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.493662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 
00:28:07.768 [2024-10-07 09:48:56.493777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.493802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.493897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.493924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.494034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.494061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.494140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.494167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.494282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.494308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 
00:28:07.768 [2024-10-07 09:48:56.494427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.494452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.494534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.494560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.494642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.494673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.494759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.494789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.494879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.494905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 
00:28:07.768 [2024-10-07 09:48:56.494989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.495125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.495229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.495328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.495432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 
00:28:07.768 [2024-10-07 09:48:56.495543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.495650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.495767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.495892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.495918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.496029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.496055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 
00:28:07.768 [2024-10-07 09:48:56.496160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.496185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.496302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.496329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.496429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.768 [2024-10-07 09:48:56.496455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.768 qpair failed and we were unable to recover it. 00:28:07.768 [2024-10-07 09:48:56.496531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.496557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.496661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.496694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 
00:28:07.769 [2024-10-07 09:48:56.496783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.496810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.496920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.496946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.497027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.497052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.497167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.497193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.497301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.497327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 
00:28:07.769 [2024-10-07 09:48:56.497404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.497430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.497541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.497567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.497636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.497662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.497750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.497776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.497886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.497911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 
00:28:07.769 [2024-10-07 09:48:56.498014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.498055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.498149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.498178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.498298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.498325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.498430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.498456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.498548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.498574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 
00:28:07.769 [2024-10-07 09:48:56.498675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.498702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.498779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.498804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.498884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.498913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.499053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.499080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.499172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.499200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 
00:28:07.769 [2024-10-07 09:48:56.499274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.499300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.499416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.499442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.499519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.499544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.499617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.499643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.499747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.499774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 
00:28:07.769 [2024-10-07 09:48:56.499853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.499879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.499993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.500103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.500214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.500319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 
00:28:07.769 [2024-10-07 09:48:56.500447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.500561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.500691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.500804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 00:28:07.769 [2024-10-07 09:48:56.500942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.769 [2024-10-07 09:48:56.500979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.769 qpair failed and we were unable to recover it. 
00:28:07.769 [2024-10-07 09:48:56.501086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.501112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.501196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.501222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.501302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.501328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.501420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.501459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.501546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.501574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 
00:28:07.770 [2024-10-07 09:48:56.501693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.501722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.501838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.501865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.501984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.502011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7ac000b90 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.502126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.502154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.502233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.502260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 
00:28:07.770 [2024-10-07 09:48:56.502348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.502373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.502456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.502482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.502593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.502619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.502724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.502751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 00:28:07.770 [2024-10-07 09:48:56.502844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.770 [2024-10-07 09:48:56.502869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.770 qpair failed and we were unable to recover it. 
00:28:07.772 Malloc0
00:28:07.772 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.772 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:07.772 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.772 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:07.772 [2024-10-07 09:48:56.513806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.513832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 00:28:07.772 [2024-10-07 09:48:56.513941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.513971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 00:28:07.772 [2024-10-07 09:48:56.514062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.514087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 00:28:07.772 [2024-10-07 09:48:56.514167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.514193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 00:28:07.772 [2024-10-07 09:48:56.514279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.514305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 
00:28:07.772 [2024-10-07 09:48:56.514433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.514461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 00:28:07.772 [2024-10-07 09:48:56.514544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.514571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 00:28:07.772 [2024-10-07 09:48:56.514674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.514700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 00:28:07.772 [2024-10-07 09:48:56.514783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.772 [2024-10-07 09:48:56.514809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.772 qpair failed and we were unable to recover it. 00:28:07.773 [2024-10-07 09:48:56.514888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.773 [2024-10-07 09:48:56.514914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.773 qpair failed and we were unable to recover it. 
00:28:07.773 [2024-10-07 09:48:56.515006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.515032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.515120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.515146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.515224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.515250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.515389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.515415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.515494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.515520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.515595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.515620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.515718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.515744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.515825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.515859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.515975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.516104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.516215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.516288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:07.773 [2024-10-07 09:48:56.516317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.516450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.516565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.516689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.516790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.516900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.516926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.517066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.517172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.517302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.517397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.517506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.517620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.517750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.517860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.517980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.518093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.518210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.518321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.518443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.518553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.518681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.518824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.518939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.518974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.519088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.519113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.773 [2024-10-07 09:48:56.519232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.773 [2024-10-07 09:48:56.519259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.773 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.519336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.519362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.519440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.519465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.519568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.519594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.519688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.519715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.519800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.519826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.519910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.519935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.520082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.520186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.520299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.520447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.520560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.520664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.520783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.520887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.520993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.521018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.521159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.521185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.521298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.521323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.521403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.521428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.521548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.521574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.521652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.521690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.521767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.521793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.521873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.521899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.521984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.522115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.522229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.522369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.522492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.522607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.522717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.522831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.522965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.522990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.523069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.523096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.523177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.523203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.523314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.523341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.523452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.523477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.523550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.523576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.523660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.523691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.523799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.774 [2024-10-07 09:48:56.523825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.774 qpair failed and we were unable to recover it.
00:28:07.774 [2024-10-07 09:48:56.523903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.523929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.524016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.524042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.524118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.524144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.524230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.524257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.524339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.524365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.524444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.524471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.775 [2024-10-07 09:48:56.524550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.524576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.524675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [2024-10-07 09:48:56.524702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.524776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.524801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.775 [2024-10-07 09:48:56.524914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.524940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:07.775 [2024-10-07 09:48:56.525022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.525125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.525239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.525361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.525490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.525598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.525754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.525859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.525961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.525987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.526064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.775 [2024-10-07 09:48:56.526089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420
00:28:07.775 qpair failed and we were unable to recover it.
00:28:07.775 [2024-10-07 09:48:56.526158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.526184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.526260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.526285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.526399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.526425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.526495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.526520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.526590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.526616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 
00:28:07.775 [2024-10-07 09:48:56.526737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.526763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.526843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.526868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.526943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.526972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.527048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.527073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.527176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.527201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 
00:28:07.775 [2024-10-07 09:48:56.527297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.527332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.527423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.527451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.527560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.527588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.527688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.527715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.527804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.527830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 
00:28:07.775 [2024-10-07 09:48:56.527940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.527965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.528042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.528067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.528142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.528167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.528243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.528269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 00:28:07.775 [2024-10-07 09:48:56.528363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.775 [2024-10-07 09:48:56.528394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.775 qpair failed and we were unable to recover it. 
00:28:07.775 [2024-10-07 09:48:56.528510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.528536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.528616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.528642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.528764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.528790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.528912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.528939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.529030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.529056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 
00:28:07.776 [2024-10-07 09:48:56.529167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.529195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.529307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.529333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.529412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.529438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.529514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.529540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.529650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.529682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 
00:28:07.776 [2024-10-07 09:48:56.529769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.529795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.529880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.529906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.529998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.530105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.530236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 
00:28:07.776 [2024-10-07 09:48:56.530348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.530465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.530625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.530744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.530847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 
00:28:07.776 [2024-10-07 09:48:56.530950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.530977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.531055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.531080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.531157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.531182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.531298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.531328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.531443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.531470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 
00:28:07.776 [2024-10-07 09:48:56.531553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.531580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.531675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.531702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.531848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.531874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.531955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.531981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.532061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 
00:28:07.776 [2024-10-07 09:48:56.532170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.532283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.532390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.532502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.776 [2024-10-07 09:48:56.532611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 
00:28:07.776 [2024-10-07 09:48:56.532725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.532841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.776 [2024-10-07 09:48:56.532946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.532972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.776 [2024-10-07 09:48:56.533053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.533080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 
00:28:07.776 [2024-10-07 09:48:56.533169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.776 [2024-10-07 09:48:56.533197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.776 qpair failed and we were unable to recover it. 00:28:07.776 [2024-10-07 09:48:56.533316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.533342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.533428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.533454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.533530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.533557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.533662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.533693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 
00:28:07.777 [2024-10-07 09:48:56.533804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.533829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.533913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.533941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.534027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.534168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.534270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 
00:28:07.777 [2024-10-07 09:48:56.534369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.534475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.534613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.534738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.534839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 
00:28:07.777 [2024-10-07 09:48:56.534937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.534962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.535041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.535139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.535278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.535414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 
00:28:07.777 [2024-10-07 09:48:56.535516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.535619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.535733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.535835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.535939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.535965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 
00:28:07.777 [2024-10-07 09:48:56.536045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.536148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.536254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.536360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.536474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 
00:28:07.777 [2024-10-07 09:48:56.536605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.536723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.536829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.536946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.536972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.537057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.537083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 
00:28:07.777 [2024-10-07 09:48:56.537166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.537192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.537274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.537300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.537392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.537418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.537513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.537552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.537639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.537692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 
00:28:07.777 [2024-10-07 09:48:56.537790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.777 [2024-10-07 09:48:56.537818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.777 qpair failed and we were unable to recover it. 00:28:07.777 [2024-10-07 09:48:56.537905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.537932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.538015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.538129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.538270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.538377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.538511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.538619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.538724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.538835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.538942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.538969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.539053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.539082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.539201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.539228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.539344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.539371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.539454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.539481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.539562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.539589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.539705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.539732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.539807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.539833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.539919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.539945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.540024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.540127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.540241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.540343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.540457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.778 [2024-10-07 09:48:56.540601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.540726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.778 [2024-10-07 09:48:56.540851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.778 [2024-10-07 09:48:56.540972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.540998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.778 [2024-10-07 09:48:56.541083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.541110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.541193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.541219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.541306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.541332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.541416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.541442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.541550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.541576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.541686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.541713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.541801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.541827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.541902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.541927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.542018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.542168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.542306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.542419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.542527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.542631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.542746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 00:28:07.778 [2024-10-07 09:48:56.542850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.778 qpair failed and we were unable to recover it. 
00:28:07.778 [2024-10-07 09:48:56.542955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.778 [2024-10-07 09:48:56.542987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.543100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.543126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.543205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.543231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.543340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.543365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.543438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.543464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.543543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.543568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab230 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.543647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.543687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.543781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.543808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.543888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.543915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.543998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.544023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.544133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.544159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7b4000b90 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.544240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.544273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 00:28:07.779 [2024-10-07 09:48:56.544387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.779 [2024-10-07 09:48:56.544415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe7a8000b90 with addr=10.0.0.2, port=4420 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.544516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.779 [2024-10-07 09:48:56.547057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.547169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.547198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.547213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.547225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.547269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.779 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:07.779 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.779 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.779 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.779 [2024-10-07 09:48:56.556933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 09:48:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 329549 00:28:07.779 [2024-10-07 09:48:56.557029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.557055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.557076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.557089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.557121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.566931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.567020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.567046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.567061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.567073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.567103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.576934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.577033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.577059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.577073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.577084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.577114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.586898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.586986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.587012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.587026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.587038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.587080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.596920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.597046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.597073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.597089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.597101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.597130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.606944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.607058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.607086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.607101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.607114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.607145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.616980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.617081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.617109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.617124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.617136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.617167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.627046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.627130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.627155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.627169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.627182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.627212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.637083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.779 [2024-10-07 09:48:56.637178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.779 [2024-10-07 09:48:56.637204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.779 [2024-10-07 09:48:56.637219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.779 [2024-10-07 09:48:56.637231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.779 [2024-10-07 09:48:56.637267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.779 qpair failed and we were unable to recover it. 
00:28:07.779 [2024-10-07 09:48:56.647064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.780 [2024-10-07 09:48:56.647151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.780 [2024-10-07 09:48:56.647182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.780 [2024-10-07 09:48:56.647198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.780 [2024-10-07 09:48:56.647210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.780 [2024-10-07 09:48:56.647241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.780 qpair failed and we were unable to recover it. 
00:28:07.780 [2024-10-07 09:48:56.657088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.780 [2024-10-07 09:48:56.657179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.780 [2024-10-07 09:48:56.657205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.780 [2024-10-07 09:48:56.657220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.780 [2024-10-07 09:48:56.657232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.780 [2024-10-07 09:48:56.657263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.780 qpair failed and we were unable to recover it. 
00:28:07.780 [2024-10-07 09:48:56.667110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.780 [2024-10-07 09:48:56.667203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.780 [2024-10-07 09:48:56.667230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.780 [2024-10-07 09:48:56.667244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.780 [2024-10-07 09:48:56.667257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.780 [2024-10-07 09:48:56.667287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.780 qpair failed and we were unable to recover it. 
00:28:07.780 [2024-10-07 09:48:56.677150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.780 [2024-10-07 09:48:56.677244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.780 [2024-10-07 09:48:56.677271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.780 [2024-10-07 09:48:56.677285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.780 [2024-10-07 09:48:56.677297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.780 [2024-10-07 09:48:56.677327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.780 qpair failed and we were unable to recover it. 
00:28:07.780 [2024-10-07 09:48:56.687185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.780 [2024-10-07 09:48:56.687297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.780 [2024-10-07 09:48:56.687325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.780 [2024-10-07 09:48:56.687341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.780 [2024-10-07 09:48:56.687353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.780 [2024-10-07 09:48:56.687383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.780 qpair failed and we were unable to recover it. 
00:28:07.780 [2024-10-07 09:48:56.697217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.780 [2024-10-07 09:48:56.697307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.780 [2024-10-07 09:48:56.697332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.780 [2024-10-07 09:48:56.697347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.780 [2024-10-07 09:48:56.697359] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:07.780 [2024-10-07 09:48:56.697389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.780 qpair failed and we were unable to recover it. 
00:28:08.039 [2024-10-07 09:48:56.707225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.039 [2024-10-07 09:48:56.707319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.039 [2024-10-07 09:48:56.707345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.039 [2024-10-07 09:48:56.707360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.039 [2024-10-07 09:48:56.707372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.039 [2024-10-07 09:48:56.707402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.039 qpair failed and we were unable to recover it. 
00:28:08.039 [2024-10-07 09:48:56.717261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.039 [2024-10-07 09:48:56.717399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.039 [2024-10-07 09:48:56.717427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.039 [2024-10-07 09:48:56.717441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.039 [2024-10-07 09:48:56.717454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.039 [2024-10-07 09:48:56.717483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.039 qpair failed and we were unable to recover it. 
00:28:08.039 [2024-10-07 09:48:56.727264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.039 [2024-10-07 09:48:56.727355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.039 [2024-10-07 09:48:56.727379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.039 [2024-10-07 09:48:56.727394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.039 [2024-10-07 09:48:56.727407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.039 [2024-10-07 09:48:56.727436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.039 qpair failed and we were unable to recover it. 
00:28:08.039 [2024-10-07 09:48:56.737366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.039 [2024-10-07 09:48:56.737467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.039 [2024-10-07 09:48:56.737499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.039 [2024-10-07 09:48:56.737515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.039 [2024-10-07 09:48:56.737527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.039 [2024-10-07 09:48:56.737557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.039 qpair failed and we were unable to recover it. 
00:28:08.039 [2024-10-07 09:48:56.747345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.039 [2024-10-07 09:48:56.747435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.039 [2024-10-07 09:48:56.747460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.039 [2024-10-07 09:48:56.747475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.039 [2024-10-07 09:48:56.747487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.039 [2024-10-07 09:48:56.747516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.039 qpair failed and we were unable to recover it. 
00:28:08.039 [2024-10-07 09:48:56.757367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.039 [2024-10-07 09:48:56.757451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.039 [2024-10-07 09:48:56.757476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.039 [2024-10-07 09:48:56.757490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.039 [2024-10-07 09:48:56.757502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.039 [2024-10-07 09:48:56.757532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.039 qpair failed and we were unable to recover it. 
00:28:08.039 [2024-10-07 09:48:56.767400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.039 [2024-10-07 09:48:56.767536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.039 [2024-10-07 09:48:56.767562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.039 [2024-10-07 09:48:56.767577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.767589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.767618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.777524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.777627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.777651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.777672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.777687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.777726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.787468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.787551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.787576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.787591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.787603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.787655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.797567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.797659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.797690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.797705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.797717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.797747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.807515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.807607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.807632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.807646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.807659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.807698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.817558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.817645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.817680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.817705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.817720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.817764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.827606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.827704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.827736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.827751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.827764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.827794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.837610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.837711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.837736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.837751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.837763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.837793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.847679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.847786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.847814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.847829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.847842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.847871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.857735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.857881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.857908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.857923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.857935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.857972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.867743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.867840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.867865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.867880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.867899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.867931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.877768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.877857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.877882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.877897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.877909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.877939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.887871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.887970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.887994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.888008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.888021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.888055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.897824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.040 [2024-10-07 09:48:56.897914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.040 [2024-10-07 09:48:56.897939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.040 [2024-10-07 09:48:56.897954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.040 [2024-10-07 09:48:56.897968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.040 [2024-10-07 09:48:56.897998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.040 qpair failed and we were unable to recover it. 
00:28:08.040 [2024-10-07 09:48:56.907820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.907912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.907937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.907951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.907964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.907993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.917853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.917948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.917978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.917993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.918005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.918036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.927914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.928001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.928026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.928040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.928053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.928083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.938017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.938128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.938153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.938167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.938179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.938208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.947977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.948069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.948094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.948108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.948120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.948149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.958139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.958247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.958272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.958286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.958305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.958336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.968008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.968105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.968130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.968144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.968156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.968185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.978101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.978197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.978227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.978242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.978254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.978283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.988081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.988168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.988192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.988206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.988218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.988247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:56.998121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:56.998250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:56.998276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:56.998291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:56.998303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:56.998333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:57.008128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:57.008219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:57.008244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:57.008259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:57.008272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:57.008302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:57.018185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:57.018278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:57.018303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:57.018317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:57.018329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:57.018358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.041 [2024-10-07 09:48:57.028183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.041 [2024-10-07 09:48:57.028278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.041 [2024-10-07 09:48:57.028302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.041 [2024-10-07 09:48:57.028316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.041 [2024-10-07 09:48:57.028329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.041 [2024-10-07 09:48:57.028359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.041 qpair failed and we were unable to recover it. 
00:28:08.300 [2024-10-07 09:48:57.038273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.300 [2024-10-07 09:48:57.038373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.300 [2024-10-07 09:48:57.038403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.300 [2024-10-07 09:48:57.038419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.300 [2024-10-07 09:48:57.038432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.300 [2024-10-07 09:48:57.038474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.300 qpair failed and we were unable to recover it. 
00:28:08.300 [2024-10-07 09:48:57.048239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.300 [2024-10-07 09:48:57.048328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.300 [2024-10-07 09:48:57.048353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.300 [2024-10-07 09:48:57.048373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.300 [2024-10-07 09:48:57.048387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.300 [2024-10-07 09:48:57.048417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.300 qpair failed and we were unable to recover it. 
00:28:08.300 [2024-10-07 09:48:57.058334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.300 [2024-10-07 09:48:57.058439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.300 [2024-10-07 09:48:57.058465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.058480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.058493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.058522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.068325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.068457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.068484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.068498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.068510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.068539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.078349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.078467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.078496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.078513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.078525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.078555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.088369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.088456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.088486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.088501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.088514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.088544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.098422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.098522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.098548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.098563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.098575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.098616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.108437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.108524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.108550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.108565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.108578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.108608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.118449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.118537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.118564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.118579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.118592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.118622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.128496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.128599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.128623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.128638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.128650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.128688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.138515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.138606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.138631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.138651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.138672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.138704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.148536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.148623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.148648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.148662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.148683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.148714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.158569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.158658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.158690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.158705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.158718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.158749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.168617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.168714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.168740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.168755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.168768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.168800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.178621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.178725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.178751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.178766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.178778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.178808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.188647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.301 [2024-10-07 09:48:57.188745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.301 [2024-10-07 09:48:57.188770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.301 [2024-10-07 09:48:57.188784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.301 [2024-10-07 09:48:57.188796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.301 [2024-10-07 09:48:57.188826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.301 qpair failed and we were unable to recover it. 
00:28:08.301 [2024-10-07 09:48:57.198680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.198767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.198792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.198807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.198820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.198850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.208720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.208804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.208829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.208843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.208856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.208886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.218764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.218861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.218886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.218900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.218913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.218943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.228807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.228940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.228981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.228999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.229011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.229048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.238802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.238895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.238922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.238937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.238950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.238980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.248831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.248943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.248968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.248983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.248996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.249029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.258853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.258945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.258975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.258989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.259002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.259031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.268891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.268974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.268999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.269014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.269026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.269062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.278910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.279004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.279032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.279047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.279060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.279090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.302 [2024-10-07 09:48:57.288925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.302 [2024-10-07 09:48:57.289020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.302 [2024-10-07 09:48:57.289045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.302 [2024-10-07 09:48:57.289060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.302 [2024-10-07 09:48:57.289073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.302 [2024-10-07 09:48:57.289103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.302 qpair failed and we were unable to recover it. 
00:28:08.564 [2024-10-07 09:48:57.298979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.564 [2024-10-07 09:48:57.299071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.564 [2024-10-07 09:48:57.299096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.564 [2024-10-07 09:48:57.299111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.564 [2024-10-07 09:48:57.299124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.564 [2024-10-07 09:48:57.299153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.564 qpair failed and we were unable to recover it. 
00:28:08.564 [2024-10-07 09:48:57.308986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.564 [2024-10-07 09:48:57.309070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.564 [2024-10-07 09:48:57.309095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.564 [2024-10-07 09:48:57.309109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.564 [2024-10-07 09:48:57.309122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.564 [2024-10-07 09:48:57.309152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.564 qpair failed and we were unable to recover it. 
00:28:08.564 [2024-10-07 09:48:57.319004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.564 [2024-10-07 09:48:57.319096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.564 [2024-10-07 09:48:57.319128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.564 [2024-10-07 09:48:57.319143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.564 [2024-10-07 09:48:57.319156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.564 [2024-10-07 09:48:57.319186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.564 qpair failed and we were unable to recover it. 
00:28:08.564 [2024-10-07 09:48:57.329034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.564 [2024-10-07 09:48:57.329154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.564 [2024-10-07 09:48:57.329180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.564 [2024-10-07 09:48:57.329195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.564 [2024-10-07 09:48:57.329207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.564 [2024-10-07 09:48:57.329239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.564 qpair failed and we were unable to recover it. 
00:28:08.564 [2024-10-07 09:48:57.339095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.564 [2024-10-07 09:48:57.339215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.564 [2024-10-07 09:48:57.339240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.564 [2024-10-07 09:48:57.339255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.564 [2024-10-07 09:48:57.339268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.564 [2024-10-07 09:48:57.339298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.564 qpair failed and we were unable to recover it. 
00:28:08.564 [2024-10-07 09:48:57.349304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.564 [2024-10-07 09:48:57.349403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.564 [2024-10-07 09:48:57.349429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.564 [2024-10-07 09:48:57.349443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.564 [2024-10-07 09:48:57.349456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.564 [2024-10-07 09:48:57.349498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.564 qpair failed and we were unable to recover it. 
00:28:08.564 [2024-10-07 09:48:57.359187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.564 [2024-10-07 09:48:57.359303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.564 [2024-10-07 09:48:57.359329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.564 [2024-10-07 09:48:57.359343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.564 [2024-10-07 09:48:57.359356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.359392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.369189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.369273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.369298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.369312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.369325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.369355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.379228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.379318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.379343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.379357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.379370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.379400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.389210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.389293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.389318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.389332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.389344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.389373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.399248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.399331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.399357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.399372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.399385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.399417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.409255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.409356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.409382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.409397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.409410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.409452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.419368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.419467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.419492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.419506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.419519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.419548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.429316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.429413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.429438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.429453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.429466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.429495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.439380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.439491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.439516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.439530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.439543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.439573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.449460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.449548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.449573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.449587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.449606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.449636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.459424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.459533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.459558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.459573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.459586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.459615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.469534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.469634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.469660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.469682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.469695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.469725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.479470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.479557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.479582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.479597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.479609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.479639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.489529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.489646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.565 [2024-10-07 09:48:57.489676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.565 [2024-10-07 09:48:57.489693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.565 [2024-10-07 09:48:57.489706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.565 [2024-10-07 09:48:57.489735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.565 qpair failed and we were unable to recover it. 
00:28:08.565 [2024-10-07 09:48:57.499586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.565 [2024-10-07 09:48:57.499699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.566 [2024-10-07 09:48:57.499725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.566 [2024-10-07 09:48:57.499740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.566 [2024-10-07 09:48:57.499753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.566 [2024-10-07 09:48:57.499784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.566 qpair failed and we were unable to recover it. 
00:28:08.566 [2024-10-07 09:48:57.509565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.566 [2024-10-07 09:48:57.509680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.566 [2024-10-07 09:48:57.509707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.566 [2024-10-07 09:48:57.509723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.566 [2024-10-07 09:48:57.509735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.566 [2024-10-07 09:48:57.509765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.566 qpair failed and we were unable to recover it. 
00:28:08.566 [2024-10-07 09:48:57.519569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.566 [2024-10-07 09:48:57.519654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.566 [2024-10-07 09:48:57.519686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.566 [2024-10-07 09:48:57.519702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.566 [2024-10-07 09:48:57.519718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.566 [2024-10-07 09:48:57.519748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.566 qpair failed and we were unable to recover it. 
00:28:08.566 [2024-10-07 09:48:57.529611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.566 [2024-10-07 09:48:57.529711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.566 [2024-10-07 09:48:57.529736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.566 [2024-10-07 09:48:57.529750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.566 [2024-10-07 09:48:57.529762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.566 [2024-10-07 09:48:57.529792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.566 qpair failed and we were unable to recover it. 
00:28:08.566 [2024-10-07 09:48:57.539663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.566 [2024-10-07 09:48:57.539769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.566 [2024-10-07 09:48:57.539794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.566 [2024-10-07 09:48:57.539814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.566 [2024-10-07 09:48:57.539827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.566 [2024-10-07 09:48:57.539857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.566 qpair failed and we were unable to recover it. 
00:28:08.566 [2024-10-07 09:48:57.549693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.566 [2024-10-07 09:48:57.549811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.566 [2024-10-07 09:48:57.549836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.566 [2024-10-07 09:48:57.549850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.566 [2024-10-07 09:48:57.549862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.566 [2024-10-07 09:48:57.549892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.566 qpair failed and we were unable to recover it. 
00:28:08.837 [2024-10-07 09:48:57.559745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.837 [2024-10-07 09:48:57.559831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.837 [2024-10-07 09:48:57.559856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.837 [2024-10-07 09:48:57.559871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.837 [2024-10-07 09:48:57.559884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.837 [2024-10-07 09:48:57.559914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.837 qpair failed and we were unable to recover it. 
00:28:08.837 [2024-10-07 09:48:57.569756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.837 [2024-10-07 09:48:57.569883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.837 [2024-10-07 09:48:57.569908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.837 [2024-10-07 09:48:57.569923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.837 [2024-10-07 09:48:57.569935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:08.837 [2024-10-07 09:48:57.569965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.837 qpair failed and we were unable to recover it. 
00:28:08.837 [2024-10-07 09:48:57.579771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.579864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.579892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.579908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.579920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.579950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.837 [2024-10-07 09:48:57.589785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.589869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.589894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.589909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.589921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.589963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.837 [2024-10-07 09:48:57.599803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.599885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.599910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.599925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.599937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.599967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.837 [2024-10-07 09:48:57.609878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.610008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.610032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.610047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.610059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.610088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.837 [2024-10-07 09:48:57.619883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.619971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.619997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.620011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.620024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.620053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.837 [2024-10-07 09:48:57.629950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.630037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.630062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.630083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.630097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.630126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.837 [2024-10-07 09:48:57.639939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.640026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.640051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.640066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.640078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.640108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.837 [2024-10-07 09:48:57.649992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.650114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.650139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.650154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.650167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.650197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.837 [2024-10-07 09:48:57.660001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.837 [2024-10-07 09:48:57.660095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.837 [2024-10-07 09:48:57.660120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.837 [2024-10-07 09:48:57.660135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.837 [2024-10-07 09:48:57.660147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.837 [2024-10-07 09:48:57.660177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.837 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.669997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.670121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.670145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.670160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.670173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.670203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.680035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.680123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.680148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.680162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.680175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.680204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.690054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.690154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.690178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.690194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.690206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.690236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.700129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.700221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.700246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.700261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.700274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.700303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.710158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.710243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.710269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.710283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.710296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.710326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.720203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.720317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.720347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.720362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.720375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.720405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.730213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.730323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.730348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.730363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.730375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.730406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.740276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.740367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.740392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.740407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.740419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.740449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.750284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.750407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.750432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.750447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.750459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.750500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.760257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.760342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.760367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.760382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.760394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.760430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.770289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.770373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.770398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.770412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.770425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.770455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.780370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.780477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.780502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.780516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.780529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.780558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.790330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.790416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.790441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.838 [2024-10-07 09:48:57.790455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.838 [2024-10-07 09:48:57.790467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.838 [2024-10-07 09:48:57.790497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.838 qpair failed and we were unable to recover it.
00:28:08.838 [2024-10-07 09:48:57.800374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.838 [2024-10-07 09:48:57.800456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.838 [2024-10-07 09:48:57.800481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.839 [2024-10-07 09:48:57.800496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.839 [2024-10-07 09:48:57.800508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.839 [2024-10-07 09:48:57.800538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.839 qpair failed and we were unable to recover it.
00:28:08.839 [2024-10-07 09:48:57.810393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.839 [2024-10-07 09:48:57.810516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.839 [2024-10-07 09:48:57.810546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.839 [2024-10-07 09:48:57.810562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.839 [2024-10-07 09:48:57.810575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.839 [2024-10-07 09:48:57.810604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.839 qpair failed and we were unable to recover it.
00:28:08.839 [2024-10-07 09:48:57.820445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.839 [2024-10-07 09:48:57.820533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.839 [2024-10-07 09:48:57.820558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.839 [2024-10-07 09:48:57.820572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.839 [2024-10-07 09:48:57.820586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:08.839 [2024-10-07 09:48:57.820615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:08.839 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.830478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.830570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.830598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.830614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.830627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.830660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.840576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.840676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.840703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.840717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.840730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.840760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.850512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.850635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.850660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.850684] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.850697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.850733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.860595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.860693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.860719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.860733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.860746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.860776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.870586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.870698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.870725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.870739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.870752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.870782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.880616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.880747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.880773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.880788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.880800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.880831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.890640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.890728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.890754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.890769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.890782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.890812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.900703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.900797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.900827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.900842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.900855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.900885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.910721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.910813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.910839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.910854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.910866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.910897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.920772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.920862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.920887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.920902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.920915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.920944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.930771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.178 [2024-10-07 09:48:57.930877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.178 [2024-10-07 09:48:57.930902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.178 [2024-10-07 09:48:57.930916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.178 [2024-10-07 09:48:57.930929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:09.178 [2024-10-07 09:48:57.930960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.178 qpair failed and we were unable to recover it.
00:28:09.178 [2024-10-07 09:48:57.940831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.178 [2024-10-07 09:48:57.940924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.178 [2024-10-07 09:48:57.940950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.178 [2024-10-07 09:48:57.940964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:57.940986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:57.941017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:57.950798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:57.950886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:57.950911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:57.950925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:57.950937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:57.950966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:57.960841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:57.960926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:57.960951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:57.960965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:57.960977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:57.961019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:57.970861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:57.970942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:57.970967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:57.970981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:57.970994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:57.971024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:57.980921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:57.981029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:57.981054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:57.981069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:57.981082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:57.981112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:57.990917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:57.991052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:57.991079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:57.991094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:57.991106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:57.991136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.000959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:58.001070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:58.001096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:58.001111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:58.001123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:58.001153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.010996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:58.011112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:58.011137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:58.011152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:58.011164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:58.011206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.021114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:58.021241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:58.021268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:58.021283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:58.021297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:58.021327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.031040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:58.031120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:58.031145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:58.031160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:58.031177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:58.031220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.041078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:58.041197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:58.041224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:58.041239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:58.041253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:58.041283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.051076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:58.051161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:58.051185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:58.051200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:58.051212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:58.051242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.061125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:58.061257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:58.061284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:58.061299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:58.061311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:58.061341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.071145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.179 [2024-10-07 09:48:58.071231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.179 [2024-10-07 09:48:58.071256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.179 [2024-10-07 09:48:58.071270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.179 [2024-10-07 09:48:58.071282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.179 [2024-10-07 09:48:58.071312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.179 qpair failed and we were unable to recover it. 
00:28:09.179 [2024-10-07 09:48:58.081199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.180 [2024-10-07 09:48:58.081286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.180 [2024-10-07 09:48:58.081311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.180 [2024-10-07 09:48:58.081326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.180 [2024-10-07 09:48:58.081339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.180 [2024-10-07 09:48:58.081369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.180 qpair failed and we were unable to recover it. 
00:28:09.180 [2024-10-07 09:48:58.091200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.180 [2024-10-07 09:48:58.091285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.180 [2024-10-07 09:48:58.091310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.180 [2024-10-07 09:48:58.091326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.180 [2024-10-07 09:48:58.091338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.180 [2024-10-07 09:48:58.091368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.180 qpair failed and we were unable to recover it. 
00:28:09.180 [2024-10-07 09:48:58.101293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.180 [2024-10-07 09:48:58.101386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.180 [2024-10-07 09:48:58.101411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.180 [2024-10-07 09:48:58.101426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.180 [2024-10-07 09:48:58.101438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.180 [2024-10-07 09:48:58.101468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.180 qpair failed and we were unable to recover it. 
00:28:09.180 [2024-10-07 09:48:58.111308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.180 [2024-10-07 09:48:58.111396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.180 [2024-10-07 09:48:58.111421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.180 [2024-10-07 09:48:58.111435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.180 [2024-10-07 09:48:58.111448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.180 [2024-10-07 09:48:58.111477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.180 qpair failed and we were unable to recover it. 
00:28:09.180 [2024-10-07 09:48:58.121327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.180 [2024-10-07 09:48:58.121417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.180 [2024-10-07 09:48:58.121442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.180 [2024-10-07 09:48:58.121463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.180 [2024-10-07 09:48:58.121477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.180 [2024-10-07 09:48:58.121507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.180 qpair failed and we were unable to recover it. 
00:28:09.180 [2024-10-07 09:48:58.131314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.180 [2024-10-07 09:48:58.131408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.180 [2024-10-07 09:48:58.131433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.180 [2024-10-07 09:48:58.131448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.180 [2024-10-07 09:48:58.131460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.180 [2024-10-07 09:48:58.131489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.180 qpair failed and we were unable to recover it. 
00:28:09.180 [2024-10-07 09:48:58.141365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.180 [2024-10-07 09:48:58.141485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.180 [2024-10-07 09:48:58.141512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.180 [2024-10-07 09:48:58.141527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.180 [2024-10-07 09:48:58.141540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.180 [2024-10-07 09:48:58.141569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.180 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.151390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.151504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.151531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.151546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.151558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.151591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.161510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.161607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.161632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.161646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.161659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.161698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.171477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.171596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.171623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.171638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.171650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.171687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.181502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.181595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.181620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.181634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.181646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.181687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.191547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.191653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.191687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.191703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.191715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.191744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.201520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.201611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.201635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.201649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.201661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.201700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.211536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.211618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.211647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.211662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.211684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.211715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.221594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.221723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.221750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.221765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.221778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.221820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.231631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.231753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.231779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.231794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.231806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.231836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.241659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.241751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.241777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.241791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.241803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.241833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.251730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.492 [2024-10-07 09:48:58.251820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.492 [2024-10-07 09:48:58.251845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.492 [2024-10-07 09:48:58.251859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.492 [2024-10-07 09:48:58.251872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.492 [2024-10-07 09:48:58.251907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.492 qpair failed and we were unable to recover it. 
00:28:09.492 [2024-10-07 09:48:58.261706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.261811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.261835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.261848] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.261861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.261891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.271749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.271848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.271872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.271886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.271898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.271927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.281882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.281976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.282002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.282017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.282029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.282059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.291793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.291927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.291953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.291969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.291981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.292010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.301846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.301936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.301967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.301983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.301995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.302025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.311903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.312019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.312045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.312060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.312073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.312105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.321882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.321969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.321995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.322009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.322022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.322052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.331886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.331970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.331996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.332010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.332023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.332053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.341949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.342039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.342064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.342079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.342092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.342130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.351945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.352053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.352078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.352093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.352105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.352135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.362014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.362129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.362157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.362174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.362188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.362220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.372004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.372089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.372114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.372129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.372142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.372172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.382031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.382129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.382156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.382171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.493 [2024-10-07 09:48:58.382184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.493 [2024-10-07 09:48:58.382214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.493 qpair failed and we were unable to recover it. 
00:28:09.493 [2024-10-07 09:48:58.392077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.493 [2024-10-07 09:48:58.392165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.493 [2024-10-07 09:48:58.392195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.493 [2024-10-07 09:48:58.392210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.494 [2024-10-07 09:48:58.392222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.494 [2024-10-07 09:48:58.392252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.494 qpair failed and we were unable to recover it. 
00:28:09.494 [2024-10-07 09:48:58.402137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.494 [2024-10-07 09:48:58.402247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.494 [2024-10-07 09:48:58.402278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.494 [2024-10-07 09:48:58.402295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.494 [2024-10-07 09:48:58.402308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.494 [2024-10-07 09:48:58.402338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.494 qpair failed and we were unable to recover it. 
00:28:09.494 [2024-10-07 09:48:58.412135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.494 [2024-10-07 09:48:58.412223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.494 [2024-10-07 09:48:58.412251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.494 [2024-10-07 09:48:58.412267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.494 [2024-10-07 09:48:58.412279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.494 [2024-10-07 09:48:58.412310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.494 qpair failed and we were unable to recover it. 
00:28:09.494 [2024-10-07 09:48:58.422178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.494 [2024-10-07 09:48:58.422272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.494 [2024-10-07 09:48:58.422300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.494 [2024-10-07 09:48:58.422315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.494 [2024-10-07 09:48:58.422328] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.494 [2024-10-07 09:48:58.422358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.494 qpair failed and we were unable to recover it. 
00:28:09.494 [2024-10-07 09:48:58.432163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.494 [2024-10-07 09:48:58.432253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.494 [2024-10-07 09:48:58.432278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.494 [2024-10-07 09:48:58.432293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.494 [2024-10-07 09:48:58.432310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.494 [2024-10-07 09:48:58.432342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.494 qpair failed and we were unable to recover it. 
00:28:09.494 [2024-10-07 09:48:58.442226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.494 [2024-10-07 09:48:58.442322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.494 [2024-10-07 09:48:58.442348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.494 [2024-10-07 09:48:58.442362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.494 [2024-10-07 09:48:58.442374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.494 [2024-10-07 09:48:58.442404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.494 qpair failed and we were unable to recover it. 
00:28:09.494 [2024-10-07 09:48:58.452228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.494 [2024-10-07 09:48:58.452318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.494 [2024-10-07 09:48:58.452343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.494 [2024-10-07 09:48:58.452358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.494 [2024-10-07 09:48:58.452370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.494 [2024-10-07 09:48:58.452399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.494 qpair failed and we were unable to recover it. 
00:28:09.800 [2024-10-07 09:48:58.462301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.800 [2024-10-07 09:48:58.462393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.800 [2024-10-07 09:48:58.462418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.800 [2024-10-07 09:48:58.462432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.800 [2024-10-07 09:48:58.462444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.800 [2024-10-07 09:48:58.462474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.800 qpair failed and we were unable to recover it. 
00:28:09.800 [2024-10-07 09:48:58.472297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.800 [2024-10-07 09:48:58.472394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.800 [2024-10-07 09:48:58.472420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.472435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.472446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.472476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.482418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.482557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.482583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.482598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.482611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.482640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.492399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.492493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.492517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.492531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.492543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.492572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.502398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.502490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.502514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.502528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.502541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.502571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.512407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.512537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.512563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.512577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.512590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.512619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.522512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.522611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.522637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.522663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.522691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.522722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.532457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.532547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.532573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.532588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.532600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.532632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.542509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.542654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.542689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.542705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.542718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.542748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.552526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.552618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.552643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.552657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.552680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.552713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.562547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.562637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.562662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.562688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.562701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.562731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.572584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.572709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.572736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.572750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.572763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.572793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.582641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.582760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.582786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.582800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.582812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.582842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.592606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.592699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.592724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.592738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.592750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.801 [2024-10-07 09:48:58.592779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.801 qpair failed and we were unable to recover it. 
00:28:09.801 [2024-10-07 09:48:58.602659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.801 [2024-10-07 09:48:58.602786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.801 [2024-10-07 09:48:58.602813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.801 [2024-10-07 09:48:58.602828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.801 [2024-10-07 09:48:58.602841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.602870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.612691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.612778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.612803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.612823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.612836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.612866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.622744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.622880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.622908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.622924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.622936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.622966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.632743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.632862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.632889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.632904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.632916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.632946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.642770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.642875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.642902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.642916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.642929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.642959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.652821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.652941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.652969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.652984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.652996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.653038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.662851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.662971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.662997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.663011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.663023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.663053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.672886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.672974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.672999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.673013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.673025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.673054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.682871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.683005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.683032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.683047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.683059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.683088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.692935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.693069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.693095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.693110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.693122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.693152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.702999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.703096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.703125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.703151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.703165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.703196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.712996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.713088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.713113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.713127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.713141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.713171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.723052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.723157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.723184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.723199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.723211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.723240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.733026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.733137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.802 [2024-10-07 09:48:58.733166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.802 [2024-10-07 09:48:58.733184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.802 [2024-10-07 09:48:58.733197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.802 [2024-10-07 09:48:58.733227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.802 qpair failed and we were unable to recover it. 
00:28:09.802 [2024-10-07 09:48:58.743046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.802 [2024-10-07 09:48:58.743139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.803 [2024-10-07 09:48:58.743164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.803 [2024-10-07 09:48:58.743178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.803 [2024-10-07 09:48:58.743191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.803 [2024-10-07 09:48:58.743221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.803 qpair failed and we were unable to recover it. 
00:28:09.803 [2024-10-07 09:48:58.753066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.803 [2024-10-07 09:48:58.753165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.803 [2024-10-07 09:48:58.753190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.803 [2024-10-07 09:48:58.753204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.803 [2024-10-07 09:48:58.753216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.803 [2024-10-07 09:48:58.753247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.803 qpair failed and we were unable to recover it. 
00:28:09.803 [2024-10-07 09:48:58.763238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.803 [2024-10-07 09:48:58.763329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.803 [2024-10-07 09:48:58.763356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.803 [2024-10-07 09:48:58.763371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.803 [2024-10-07 09:48:58.763398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.803 [2024-10-07 09:48:58.763428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.803 qpair failed and we were unable to recover it. 
00:28:09.803 [2024-10-07 09:48:58.773130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.803 [2024-10-07 09:48:58.773225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.803 [2024-10-07 09:48:58.773250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.803 [2024-10-07 09:48:58.773265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.803 [2024-10-07 09:48:58.773278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:09.803 [2024-10-07 09:48:58.773308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.803 qpair failed and we were unable to recover it. 
00:28:10.093 [2024-10-07 09:48:58.783170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.093 [2024-10-07 09:48:58.783269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.093 [2024-10-07 09:48:58.783293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.783308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.783321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.783350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.793233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.793337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.793369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.793386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.793398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.793427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.803234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.803331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.803356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.803370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.803382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.803412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.813284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.813358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.813382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.813397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.813410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.813439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.823307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.823400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.823425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.823439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.823451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.823480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.833339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.833425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.833449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.833464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.833476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.833518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.843331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.843439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.843466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.843481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.843493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.843522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.853403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.853516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.853542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.853557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.853570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.853599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.863480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.863610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.863634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.863648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.863660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.863698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.873499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.873590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.873616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.873631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.873643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.873682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.883481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.883596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.883628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.883645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.883657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.883693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.893445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.893547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.893572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.893586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.893599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.893628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.903506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.903598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.903622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.903636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.903648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.094 [2024-10-07 09:48:58.903684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.094 qpair failed and we were unable to recover it. 
00:28:10.094 [2024-10-07 09:48:58.913525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.094 [2024-10-07 09:48:58.913616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.094 [2024-10-07 09:48:58.913644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.094 [2024-10-07 09:48:58.913659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.094 [2024-10-07 09:48:58.913683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.913714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:58.923576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:58.923704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:58.923731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:58.923746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:58.923764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.923797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:58.933569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:58.933656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:58.933689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:58.933704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:58.933716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.933746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:58.943653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:58.943775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:58.943801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:58.943816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:58.943828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.943858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:58.953658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:58.953779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:58.953804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:58.953818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:58.953830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.953861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:58.963655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:58.963743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:58.963768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:58.963782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:58.963795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.963825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:58.973690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:58.973793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:58.973818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:58.973831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:58.973843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.973873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:58.983771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:58.983860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:58.983885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:58.983900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:58.983912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.983942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:58.993763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:58.993844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:58.993869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:58.993884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:58.993896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:58.993938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:59.003798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:59.003918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:59.003943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:59.003958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:59.003971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:59.004003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:59.013827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:59.013947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:59.013971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:59.013987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:59.014005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:59.014036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:59.023850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:59.023975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:59.023999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:59.024014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:59.024026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:59.024056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:59.033905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:59.034018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:59.034043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:59.034057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:59.034069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:59.034099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:59.043908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.095 [2024-10-07 09:48:59.043993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.095 [2024-10-07 09:48:59.044019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.095 [2024-10-07 09:48:59.044034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.095 [2024-10-07 09:48:59.044047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.095 [2024-10-07 09:48:59.044077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.095 qpair failed and we were unable to recover it. 
00:28:10.095 [2024-10-07 09:48:59.053961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.096 [2024-10-07 09:48:59.054046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.096 [2024-10-07 09:48:59.054072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.096 [2024-10-07 09:48:59.054087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.096 [2024-10-07 09:48:59.054099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.096 [2024-10-07 09:48:59.054130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.096 qpair failed and we were unable to recover it. 
00:28:10.096 [2024-10-07 09:48:59.064000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.096 [2024-10-07 09:48:59.064114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.096 [2024-10-07 09:48:59.064139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.096 [2024-10-07 09:48:59.064154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.096 [2024-10-07 09:48:59.064166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.096 [2024-10-07 09:48:59.064196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.096 qpair failed and we were unable to recover it. 
00:28:10.096 [2024-10-07 09:48:59.074023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.096 [2024-10-07 09:48:59.074109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.096 [2024-10-07 09:48:59.074134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.096 [2024-10-07 09:48:59.074148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.096 [2024-10-07 09:48:59.074160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.096 [2024-10-07 09:48:59.074189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.096 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.084132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.084221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.084247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.084262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.084275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.084305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.094063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.094149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.094173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.094187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.094200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.094230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.104067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.104189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.104216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.104238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.104251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.104281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.114095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.114191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.114215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.114230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.114243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.114285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.124116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.124209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.124235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.124249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.124262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.124292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.134153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.134232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.134257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.134271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.134283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.134312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.144282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.144376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.144401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.144416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.144428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.144457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.154207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.154331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.154363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.154378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.154391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.154420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.164266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.164356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.164380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.164395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.164407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.164437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.174286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.370 [2024-10-07 09:48:59.174371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.370 [2024-10-07 09:48:59.174395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.370 [2024-10-07 09:48:59.174409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.370 [2024-10-07 09:48:59.174422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.370 [2024-10-07 09:48:59.174452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.370 qpair failed and we were unable to recover it. 
00:28:10.370 [2024-10-07 09:48:59.184340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.370 [2024-10-07 09:48:59.184445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.370 [2024-10-07 09:48:59.184469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.370 [2024-10-07 09:48:59.184483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.370 [2024-10-07 09:48:59.184496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.370 [2024-10-07 09:48:59.184525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.370 qpair failed and we were unable to recover it.
00:28:10.370 [2024-10-07 09:48:59.194346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.370 [2024-10-07 09:48:59.194437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.370 [2024-10-07 09:48:59.194462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.194483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.194496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.194526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.204379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.204479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.204504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.204518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.204531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.204563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.214459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.214557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.214581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.214596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.214609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.214638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.224407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.224494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.224519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.224534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.224546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.224575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.234465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.234550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.234575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.234590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.234603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.234632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.244583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.244675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.244710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.244724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.244737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.244766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.254476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.254561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.254586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.254601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.254613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.254643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.264529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.264620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.264644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.264659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.264680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.264711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.274568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.274699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.274724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.274738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.274751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.274781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.284652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.284741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.284772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.284788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.284800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.284830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.294658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.294749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.294775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.294789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.294801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.294830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.304623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.304718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.304743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.304757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.304770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.304800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.314642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.314737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.314763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.314777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.314790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.314820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.371 [2024-10-07 09:48:59.324683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.371 [2024-10-07 09:48:59.324771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.371 [2024-10-07 09:48:59.324796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.371 [2024-10-07 09:48:59.324811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.371 [2024-10-07 09:48:59.324823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.371 [2024-10-07 09:48:59.324859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.371 qpair failed and we were unable to recover it.
00:28:10.372 [2024-10-07 09:48:59.334756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.372 [2024-10-07 09:48:59.334861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.372 [2024-10-07 09:48:59.334886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.372 [2024-10-07 09:48:59.334901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.372 [2024-10-07 09:48:59.334913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.372 [2024-10-07 09:48:59.334944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.372 qpair failed and we were unable to recover it.
00:28:10.372 [2024-10-07 09:48:59.344756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.372 [2024-10-07 09:48:59.344844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.372 [2024-10-07 09:48:59.344869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.372 [2024-10-07 09:48:59.344883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.372 [2024-10-07 09:48:59.344895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.372 [2024-10-07 09:48:59.344925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.372 qpair failed and we were unable to recover it.
00:28:10.372 [2024-10-07 09:48:59.354771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.372 [2024-10-07 09:48:59.354864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.372 [2024-10-07 09:48:59.354889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.372 [2024-10-07 09:48:59.354903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.372 [2024-10-07 09:48:59.354915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.372 [2024-10-07 09:48:59.354945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.372 qpair failed and we were unable to recover it.
00:28:10.634 [2024-10-07 09:48:59.364822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.634 [2024-10-07 09:48:59.364938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.634 [2024-10-07 09:48:59.364973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.634 [2024-10-07 09:48:59.364990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.634 [2024-10-07 09:48:59.365003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.634 [2024-10-07 09:48:59.365033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.634 qpair failed and we were unable to recover it.
00:28:10.634 [2024-10-07 09:48:59.374839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.634 [2024-10-07 09:48:59.374925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.634 [2024-10-07 09:48:59.374957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.634 [2024-10-07 09:48:59.374973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.634 [2024-10-07 09:48:59.374985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.634 [2024-10-07 09:48:59.375015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.634 qpair failed and we were unable to recover it.
00:28:10.634 [2024-10-07 09:48:59.384870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.634 [2024-10-07 09:48:59.384960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.634 [2024-10-07 09:48:59.384985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.634 [2024-10-07 09:48:59.385000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.634 [2024-10-07 09:48:59.385013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.634 [2024-10-07 09:48:59.385042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.634 qpair failed and we were unable to recover it.
00:28:10.634 [2024-10-07 09:48:59.394860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.634 [2024-10-07 09:48:59.394943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.634 [2024-10-07 09:48:59.394967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.634 [2024-10-07 09:48:59.394982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.634 [2024-10-07 09:48:59.394994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.634 [2024-10-07 09:48:59.395025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.634 qpair failed and we were unable to recover it.
00:28:10.634 [2024-10-07 09:48:59.404925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.634 [2024-10-07 09:48:59.405010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.634 [2024-10-07 09:48:59.405036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.634 [2024-10-07 09:48:59.405050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.634 [2024-10-07 09:48:59.405063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.634 [2024-10-07 09:48:59.405092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.634 qpair failed and we were unable to recover it.
00:28:10.634 [2024-10-07 09:48:59.414939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.634 [2024-10-07 09:48:59.415035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.634 [2024-10-07 09:48:59.415063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.634 [2024-10-07 09:48:59.415081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.415094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.415130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.424980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.425067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.425092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.425107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.425120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.425150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.435005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.435091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.435118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.435134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.435146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.435176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.445020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.445106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.445132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.445146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.445159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.445188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.455029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.455121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.455146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.455161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.455173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.455202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.465139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.465233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.465271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.465287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.465299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.465330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.475139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.475229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.475254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.475268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.475280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.475310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.485167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.485265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.485290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.485304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.485317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.485347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.495157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.495243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.495268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.495282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.495296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.495325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.505291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.505392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.505416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.505431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.505450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.505481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.515249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.515339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.515368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.515385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.515397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.515428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.525252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.525337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.525362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.525377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.525390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.525419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.535260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.635 [2024-10-07 09:48:59.535374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.635 [2024-10-07 09:48:59.535400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.635 [2024-10-07 09:48:59.535415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.635 [2024-10-07 09:48:59.535428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.635 [2024-10-07 09:48:59.535457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.635 qpair failed and we were unable to recover it.
00:28:10.635 [2024-10-07 09:48:59.545382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.635 [2024-10-07 09:48:59.545485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.635 [2024-10-07 09:48:59.545510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.635 [2024-10-07 09:48:59.545524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.635 [2024-10-07 09:48:59.545536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.635 [2024-10-07 09:48:59.545566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.635 qpair failed and we were unable to recover it. 
00:28:10.635 [2024-10-07 09:48:59.555335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.636 [2024-10-07 09:48:59.555427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.636 [2024-10-07 09:48:59.555453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.636 [2024-10-07 09:48:59.555467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.636 [2024-10-07 09:48:59.555479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.636 [2024-10-07 09:48:59.555509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.636 qpair failed and we were unable to recover it. 
00:28:10.636 [2024-10-07 09:48:59.565352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.636 [2024-10-07 09:48:59.565434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.636 [2024-10-07 09:48:59.565459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.636 [2024-10-07 09:48:59.565475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.636 [2024-10-07 09:48:59.565487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.636 [2024-10-07 09:48:59.565518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.636 qpair failed and we were unable to recover it. 
00:28:10.636 [2024-10-07 09:48:59.575472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.636 [2024-10-07 09:48:59.575558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.636 [2024-10-07 09:48:59.575583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.636 [2024-10-07 09:48:59.575598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.636 [2024-10-07 09:48:59.575610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.636 [2024-10-07 09:48:59.575640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.636 qpair failed and we were unable to recover it. 
00:28:10.636 [2024-10-07 09:48:59.585419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.636 [2024-10-07 09:48:59.585510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.636 [2024-10-07 09:48:59.585535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.636 [2024-10-07 09:48:59.585549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.636 [2024-10-07 09:48:59.585562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.636 [2024-10-07 09:48:59.585592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.636 qpair failed and we were unable to recover it. 
00:28:10.636 [2024-10-07 09:48:59.595465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.636 [2024-10-07 09:48:59.595584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.636 [2024-10-07 09:48:59.595610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.636 [2024-10-07 09:48:59.595631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.636 [2024-10-07 09:48:59.595645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.636 [2024-10-07 09:48:59.595685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.636 qpair failed and we were unable to recover it. 
00:28:10.636 [2024-10-07 09:48:59.605514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.636 [2024-10-07 09:48:59.605612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.636 [2024-10-07 09:48:59.605637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.636 [2024-10-07 09:48:59.605652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.636 [2024-10-07 09:48:59.605673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.636 [2024-10-07 09:48:59.605729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.636 qpair failed and we were unable to recover it. 
00:28:10.636 [2024-10-07 09:48:59.615487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.636 [2024-10-07 09:48:59.615567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.636 [2024-10-07 09:48:59.615592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.636 [2024-10-07 09:48:59.615606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.636 [2024-10-07 09:48:59.615618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.636 [2024-10-07 09:48:59.615648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.636 qpair failed and we were unable to recover it. 
00:28:10.636 [2024-10-07 09:48:59.625557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.636 [2024-10-07 09:48:59.625652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.636 [2024-10-07 09:48:59.625686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.636 [2024-10-07 09:48:59.625713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.636 [2024-10-07 09:48:59.625726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.636 [2024-10-07 09:48:59.625763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.636 qpair failed and we were unable to recover it. 
00:28:10.897 [2024-10-07 09:48:59.635570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.897 [2024-10-07 09:48:59.635698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.897 [2024-10-07 09:48:59.635724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.897 [2024-10-07 09:48:59.635739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.897 [2024-10-07 09:48:59.635751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.897 [2024-10-07 09:48:59.635781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.897 qpair failed and we were unable to recover it. 
00:28:10.897 [2024-10-07 09:48:59.645635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.897 [2024-10-07 09:48:59.645726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.897 [2024-10-07 09:48:59.645751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.897 [2024-10-07 09:48:59.645766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.897 [2024-10-07 09:48:59.645778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.897 [2024-10-07 09:48:59.645810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.897 qpair failed and we were unable to recover it. 
00:28:10.897 [2024-10-07 09:48:59.655647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.655761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.655786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.655806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.655818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.655849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.665627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.665722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.665747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.665761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.665773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.665814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.675651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.675749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.675774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.675788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.675801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.675831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.685673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.685762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.685787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.685807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.685821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.685851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.695745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.695827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.695852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.695866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.695878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.695920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.705748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.705847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.705872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.705886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.705898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.705928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.715752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.715829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.715854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.715869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.715881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.715911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.725805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.725892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.725917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.725932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.725945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.725974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.735821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.735949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.735974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.735988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.736001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.736043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.745883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.745975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.746000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.746013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.746026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.746055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.755926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.756009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.756034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.756048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.756060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.756090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.765896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.765984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.766009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.766023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.766036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.766066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.775948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.776068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.898 [2024-10-07 09:48:59.776099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.898 [2024-10-07 09:48:59.776114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.898 [2024-10-07 09:48:59.776126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.898 [2024-10-07 09:48:59.776158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.898 qpair failed and we were unable to recover it. 
00:28:10.898 [2024-10-07 09:48:59.785973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.898 [2024-10-07 09:48:59.786063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.899 [2024-10-07 09:48:59.786088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.899 [2024-10-07 09:48:59.786103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.899 [2024-10-07 09:48:59.786116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.899 [2024-10-07 09:48:59.786146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.899 qpair failed and we were unable to recover it. 
00:28:10.899 [2024-10-07 09:48:59.795989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.899 [2024-10-07 09:48:59.796073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.899 [2024-10-07 09:48:59.796098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.899 [2024-10-07 09:48:59.796112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.899 [2024-10-07 09:48:59.796125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.899 [2024-10-07 09:48:59.796157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.899 qpair failed and we were unable to recover it. 
00:28:10.899 [2024-10-07 09:48:59.806007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.899 [2024-10-07 09:48:59.806095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.899 [2024-10-07 09:48:59.806119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.899 [2024-10-07 09:48:59.806134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.899 [2024-10-07 09:48:59.806147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.899 [2024-10-07 09:48:59.806176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.899 qpair failed and we were unable to recover it. 
00:28:10.899 [2024-10-07 09:48:59.816106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.899 [2024-10-07 09:48:59.816186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.899 [2024-10-07 09:48:59.816211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.899 [2024-10-07 09:48:59.816227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.899 [2024-10-07 09:48:59.816239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:10.899 [2024-10-07 09:48:59.816287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.899 qpair failed and we were unable to recover it. 
00:28:10.899 [2024-10-07 09:48:59.826141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.899 [2024-10-07 09:48:59.826234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.899 [2024-10-07 09:48:59.826260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.899 [2024-10-07 09:48:59.826274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.899 [2024-10-07 09:48:59.826286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.899 [2024-10-07 09:48:59.826316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.899 qpair failed and we were unable to recover it.
00:28:10.899 [2024-10-07 09:48:59.836102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.899 [2024-10-07 09:48:59.836190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.899 [2024-10-07 09:48:59.836214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.899 [2024-10-07 09:48:59.836229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.899 [2024-10-07 09:48:59.836241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.899 [2024-10-07 09:48:59.836270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.899 qpair failed and we were unable to recover it.
00:28:10.899 [2024-10-07 09:48:59.846136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.899 [2024-10-07 09:48:59.846222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.899 [2024-10-07 09:48:59.846246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.899 [2024-10-07 09:48:59.846261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.899 [2024-10-07 09:48:59.846274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.899 [2024-10-07 09:48:59.846303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.899 qpair failed and we were unable to recover it.
00:28:10.899 [2024-10-07 09:48:59.856166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.899 [2024-10-07 09:48:59.856293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.899 [2024-10-07 09:48:59.856321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.899 [2024-10-07 09:48:59.856337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.899 [2024-10-07 09:48:59.856349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.899 [2024-10-07 09:48:59.856394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.899 qpair failed and we were unable to recover it.
00:28:10.899 [2024-10-07 09:48:59.866183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.899 [2024-10-07 09:48:59.866297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.899 [2024-10-07 09:48:59.866329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.899 [2024-10-07 09:48:59.866345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.899 [2024-10-07 09:48:59.866358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.899 [2024-10-07 09:48:59.866387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.899 qpair failed and we were unable to recover it.
00:28:10.899 [2024-10-07 09:48:59.876214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.899 [2024-10-07 09:48:59.876296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.899 [2024-10-07 09:48:59.876322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.899 [2024-10-07 09:48:59.876337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.899 [2024-10-07 09:48:59.876350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.899 [2024-10-07 09:48:59.876391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.899 qpair failed and we were unable to recover it.
00:28:10.899 [2024-10-07 09:48:59.886248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.899 [2024-10-07 09:48:59.886358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.899 [2024-10-07 09:48:59.886383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.899 [2024-10-07 09:48:59.886398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.899 [2024-10-07 09:48:59.886410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:10.899 [2024-10-07 09:48:59.886440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:10.899 qpair failed and we were unable to recover it.
00:28:11.159 [2024-10-07 09:48:59.896250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.896332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.896357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.896371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.896384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.896414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.906337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.906437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.906462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.906477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.906489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.906525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.916320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.916403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.916427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.916441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.916454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.916484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.926351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.926434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.926459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.926474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.926487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.926516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.936462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.936543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.936569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.936583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.936596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.936625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.946416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.946503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.946528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.946543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.946555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.946585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.956436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.956529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.956560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.956574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.956587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.956617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.966587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.966705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.966732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.966746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.966759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.966789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.976549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.976634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.976658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.976680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.976694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.976732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.986582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.986715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.986741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.986755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.986768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.986798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:48:59.996609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:48:59.996719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:48:59.996747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:48:59.996762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:48:59.996781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:48:59.996811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:49:00.006681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:49:00.006779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:49:00.006805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:49:00.006820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:49:00.006832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:49:00.006862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:49:00.016631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.160 [2024-10-07 09:49:00.016734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.160 [2024-10-07 09:49:00.016762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.160 [2024-10-07 09:49:00.016777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.160 [2024-10-07 09:49:00.016790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.160 [2024-10-07 09:49:00.016822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.160 qpair failed and we were unable to recover it.
00:28:11.160 [2024-10-07 09:49:00.026719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.026812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.026838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.026853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.026865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.026896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.036690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.036779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.036805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.036820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.036832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.036862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.046763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.046874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.046901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.046916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.046929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.046958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.056733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.056824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.056848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.056863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.056876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.056905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.066767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.066858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.066883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.066897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.066910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.066939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.076899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.076981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.077007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.077020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.077032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.077062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.086834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.086921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.086946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.086960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.086980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.087011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.096881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.096989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.097017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.097032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.097044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.097085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.106895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.106985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.107009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.107023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.107037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.107066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.116930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.117055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.117079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.117093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.117106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.117135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.127105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.127233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.161 [2024-10-07 09:49:00.127261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.161 [2024-10-07 09:49:00.127276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.161 [2024-10-07 09:49:00.127289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.161 [2024-10-07 09:49:00.127332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.161 qpair failed and we were unable to recover it.
00:28:11.161 [2024-10-07 09:49:00.136974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.161 [2024-10-07 09:49:00.137063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.162 [2024-10-07 09:49:00.137087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.162 [2024-10-07 09:49:00.137102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.162 [2024-10-07 09:49:00.137115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.162 [2024-10-07 09:49:00.137144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.162 qpair failed and we were unable to recover it.
00:28:11.162 [2024-10-07 09:49:00.147018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.162 [2024-10-07 09:49:00.147109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.162 [2024-10-07 09:49:00.147133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.162 [2024-10-07 09:49:00.147147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.162 [2024-10-07 09:49:00.147160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.162 [2024-10-07 09:49:00.147190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.162 qpair failed and we were unable to recover it.
00:28:11.421 [2024-10-07 09:49:00.157022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.421 [2024-10-07 09:49:00.157124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.421 [2024-10-07 09:49:00.157149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.421 [2024-10-07 09:49:00.157163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.421 [2024-10-07 09:49:00.157175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.421 [2024-10-07 09:49:00.157204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.421 qpair failed and we were unable to recover it.
00:28:11.421 [2024-10-07 09:49:00.167077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.421 [2024-10-07 09:49:00.167158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.421 [2024-10-07 09:49:00.167184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.421 [2024-10-07 09:49:00.167198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.421 [2024-10-07 09:49:00.167210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.421 [2024-10-07 09:49:00.167240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.421 qpair failed and we were unable to recover it.
00:28:11.421 [2024-10-07 09:49:00.177071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.421 [2024-10-07 09:49:00.177153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.421 [2024-10-07 09:49:00.177178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.421 [2024-10-07 09:49:00.177198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.421 [2024-10-07 09:49:00.177212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.421 [2024-10-07 09:49:00.177242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.421 qpair failed and we were unable to recover it.
00:28:11.421 [2024-10-07 09:49:00.187148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.421 [2024-10-07 09:49:00.187255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.421 [2024-10-07 09:49:00.187279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.421 [2024-10-07 09:49:00.187294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.421 [2024-10-07 09:49:00.187306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.421 [2024-10-07 09:49:00.187335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.421 qpair failed and we were unable to recover it. 
00:28:11.421 [2024-10-07 09:49:00.197182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.421 [2024-10-07 09:49:00.197272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.421 [2024-10-07 09:49:00.197297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.421 [2024-10-07 09:49:00.197311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.421 [2024-10-07 09:49:00.197324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.421 [2024-10-07 09:49:00.197354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.421 qpair failed and we were unable to recover it. 
00:28:11.421 [2024-10-07 09:49:00.207197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.421 [2024-10-07 09:49:00.207286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.421 [2024-10-07 09:49:00.207312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.421 [2024-10-07 09:49:00.207326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.421 [2024-10-07 09:49:00.207338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.421 [2024-10-07 09:49:00.207371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.421 qpair failed and we were unable to recover it. 
00:28:11.421 [2024-10-07 09:49:00.217173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.421 [2024-10-07 09:49:00.217252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.421 [2024-10-07 09:49:00.217277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.421 [2024-10-07 09:49:00.217291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.421 [2024-10-07 09:49:00.217304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.421 [2024-10-07 09:49:00.217333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.421 qpair failed and we were unable to recover it. 
00:28:11.421 [2024-10-07 09:49:00.227258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.421 [2024-10-07 09:49:00.227349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.421 [2024-10-07 09:49:00.227374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.421 [2024-10-07 09:49:00.227388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.421 [2024-10-07 09:49:00.227401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.421 [2024-10-07 09:49:00.227430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.421 qpair failed and we were unable to recover it. 
00:28:11.421 [2024-10-07 09:49:00.237257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.421 [2024-10-07 09:49:00.237339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.421 [2024-10-07 09:49:00.237363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.421 [2024-10-07 09:49:00.237377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.421 [2024-10-07 09:49:00.237390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.421 [2024-10-07 09:49:00.237421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.421 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.247300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.247397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.247422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.247436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.247449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.247479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.257344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.257448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.257472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.257487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.257499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.257540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.267374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.267486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.267513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.267534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.267547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.267576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.277379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.277480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.277505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.277519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.277532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.277561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.287448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.287543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.287568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.287583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.287596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.287625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.297405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.297492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.297516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.297530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.297543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.297573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.307466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.307559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.307584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.307598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.307610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.307639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.317485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.317586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.317611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.317625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.317638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.317675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.327592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.327691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.327718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.327732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.327745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.327788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.337609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.337697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.337723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.337737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.337749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.337779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.347571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.347662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.347695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.347709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.347722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.347752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.357581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.357707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.357739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.357756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.357768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.357798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.367697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.367787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.422 [2024-10-07 09:49:00.367815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.422 [2024-10-07 09:49:00.367832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.422 [2024-10-07 09:49:00.367844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.422 [2024-10-07 09:49:00.367874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.422 qpair failed and we were unable to recover it. 
00:28:11.422 [2024-10-07 09:49:00.377663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.422 [2024-10-07 09:49:00.377756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.423 [2024-10-07 09:49:00.377782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.423 [2024-10-07 09:49:00.377797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.423 [2024-10-07 09:49:00.377810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.423 [2024-10-07 09:49:00.377840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.423 qpair failed and we were unable to recover it. 
00:28:11.423 [2024-10-07 09:49:00.387661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.423 [2024-10-07 09:49:00.387767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.423 [2024-10-07 09:49:00.387792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.423 [2024-10-07 09:49:00.387806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.423 [2024-10-07 09:49:00.387819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.423 [2024-10-07 09:49:00.387849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.423 qpair failed and we were unable to recover it. 
00:28:11.423 [2024-10-07 09:49:00.397707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.423 [2024-10-07 09:49:00.397792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.423 [2024-10-07 09:49:00.397816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.423 [2024-10-07 09:49:00.397830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.423 [2024-10-07 09:49:00.397843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.423 [2024-10-07 09:49:00.397879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.423 qpair failed and we were unable to recover it. 
00:28:11.423 [2024-10-07 09:49:00.407751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.423 [2024-10-07 09:49:00.407840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.423 [2024-10-07 09:49:00.407866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.423 [2024-10-07 09:49:00.407880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.423 [2024-10-07 09:49:00.407893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.423 [2024-10-07 09:49:00.407924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.423 qpair failed and we were unable to recover it. 
00:28:11.683 [2024-10-07 09:49:00.417742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.683 [2024-10-07 09:49:00.417837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.683 [2024-10-07 09:49:00.417863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.683 [2024-10-07 09:49:00.417877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.683 [2024-10-07 09:49:00.417889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.683 [2024-10-07 09:49:00.417919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.683 qpair failed and we were unable to recover it. 
00:28:11.683 [2024-10-07 09:49:00.427814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.683 [2024-10-07 09:49:00.427902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.683 [2024-10-07 09:49:00.427927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.683 [2024-10-07 09:49:00.427941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.683 [2024-10-07 09:49:00.427954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.683 [2024-10-07 09:49:00.427984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.683 qpair failed and we were unable to recover it. 
00:28:11.683 [2024-10-07 09:49:00.437823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.683 [2024-10-07 09:49:00.437916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.683 [2024-10-07 09:49:00.437940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.683 [2024-10-07 09:49:00.437954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.683 [2024-10-07 09:49:00.437967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.683 [2024-10-07 09:49:00.437996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.683 qpair failed and we were unable to recover it. 
00:28:11.683 [2024-10-07 09:49:00.447875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.683 [2024-10-07 09:49:00.447959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.683 [2024-10-07 09:49:00.447990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.683 [2024-10-07 09:49:00.448005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.683 [2024-10-07 09:49:00.448017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.683 [2024-10-07 09:49:00.448047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.683 qpair failed and we were unable to recover it. 
00:28:11.683 [2024-10-07 09:49:00.457970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.683 [2024-10-07 09:49:00.458054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.683 [2024-10-07 09:49:00.458079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.683 [2024-10-07 09:49:00.458093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.683 [2024-10-07 09:49:00.458105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.683 [2024-10-07 09:49:00.458134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.683 qpair failed and we were unable to recover it. 
00:28:11.683 [2024-10-07 09:49:00.467926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.683 [2024-10-07 09:49:00.468017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.683 [2024-10-07 09:49:00.468041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.683 [2024-10-07 09:49:00.468055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.683 [2024-10-07 09:49:00.468068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.683 [2024-10-07 09:49:00.468097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.683 qpair failed and we were unable to recover it.
00:28:11.683 [2024-10-07 09:49:00.477962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.683 [2024-10-07 09:49:00.478047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.683 [2024-10-07 09:49:00.478072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.683 [2024-10-07 09:49:00.478087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.683 [2024-10-07 09:49:00.478099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.683 [2024-10-07 09:49:00.478128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.683 qpair failed and we were unable to recover it.
00:28:11.683 [2024-10-07 09:49:00.487977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.683 [2024-10-07 09:49:00.488106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.683 [2024-10-07 09:49:00.488132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.683 [2024-10-07 09:49:00.488147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.683 [2024-10-07 09:49:00.488165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.683 [2024-10-07 09:49:00.488196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.683 qpair failed and we were unable to recover it.
00:28:11.683 [2024-10-07 09:49:00.497998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.683 [2024-10-07 09:49:00.498082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.683 [2024-10-07 09:49:00.498107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.683 [2024-10-07 09:49:00.498121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.683 [2024-10-07 09:49:00.498134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.683 [2024-10-07 09:49:00.498163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.683 qpair failed and we were unable to recover it.
00:28:11.683 [2024-10-07 09:49:00.508028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.683 [2024-10-07 09:49:00.508115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.683 [2024-10-07 09:49:00.508139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.683 [2024-10-07 09:49:00.508153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.683 [2024-10-07 09:49:00.508166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.683 [2024-10-07 09:49:00.508195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.683 qpair failed and we were unable to recover it.
00:28:11.683 [2024-10-07 09:49:00.518144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.518230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.518256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.518270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.518282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.518312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.528085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.528171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.528196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.528211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.528223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.528264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.538083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.538168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.538193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.538208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.538220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.538250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.548146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.548262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.548288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.548303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.548316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.548346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.558248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.558358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.558384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.558399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.558411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.558440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.568222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.568306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.568335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.568351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.568364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.568395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.578202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.578285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.578310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.578325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.578343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.578374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.588295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.588416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.588443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.588458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.588470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.588499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.598285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.598393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.598420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.598435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.598447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.598476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.608317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.608406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.608434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.608451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.608463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.608493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.618342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.618453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.618479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.618493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.618506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.618536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.628416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.628505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.628530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.628544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.628556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.628587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.638422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.638528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.638552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.638567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.684 [2024-10-07 09:49:00.638580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.684 [2024-10-07 09:49:00.638610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.684 qpair failed and we were unable to recover it.
00:28:11.684 [2024-10-07 09:49:00.648480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.684 [2024-10-07 09:49:00.648619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.684 [2024-10-07 09:49:00.648651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.684 [2024-10-07 09:49:00.648675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.685 [2024-10-07 09:49:00.648691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.685 [2024-10-07 09:49:00.648742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.685 qpair failed and we were unable to recover it.
00:28:11.685 [2024-10-07 09:49:00.658440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.685 [2024-10-07 09:49:00.658528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.685 [2024-10-07 09:49:00.658553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.685 [2024-10-07 09:49:00.658567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.685 [2024-10-07 09:49:00.658580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.685 [2024-10-07 09:49:00.658611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.685 qpair failed and we were unable to recover it.
00:28:11.685 [2024-10-07 09:49:00.668499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.685 [2024-10-07 09:49:00.668626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.685 [2024-10-07 09:49:00.668671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.685 [2024-10-07 09:49:00.668694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.685 [2024-10-07 09:49:00.668708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.685 [2024-10-07 09:49:00.668738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.685 qpair failed and we were unable to recover it.
00:28:11.947 [2024-10-07 09:49:00.678515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.947 [2024-10-07 09:49:00.678606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.947 [2024-10-07 09:49:00.678631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.947 [2024-10-07 09:49:00.678645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.947 [2024-10-07 09:49:00.678658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.947 [2024-10-07 09:49:00.678698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.947 qpair failed and we were unable to recover it.
00:28:11.947 [2024-10-07 09:49:00.688554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.947 [2024-10-07 09:49:00.688644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.947 [2024-10-07 09:49:00.688677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.947 [2024-10-07 09:49:00.688693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.947 [2024-10-07 09:49:00.688706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.947 [2024-10-07 09:49:00.688736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.947 qpair failed and we were unable to recover it.
00:28:11.947 [2024-10-07 09:49:00.698567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.947 [2024-10-07 09:49:00.698683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.947 [2024-10-07 09:49:00.698710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.947 [2024-10-07 09:49:00.698725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.947 [2024-10-07 09:49:00.698738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.947 [2024-10-07 09:49:00.698780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.947 qpair failed and we were unable to recover it.
00:28:11.947 [2024-10-07 09:49:00.708598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.947 [2024-10-07 09:49:00.708698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.947 [2024-10-07 09:49:00.708724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.947 [2024-10-07 09:49:00.708738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.947 [2024-10-07 09:49:00.708751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.947 [2024-10-07 09:49:00.708781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.947 qpair failed and we were unable to recover it.
00:28:11.947 [2024-10-07 09:49:00.718689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.947 [2024-10-07 09:49:00.718834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.947 [2024-10-07 09:49:00.718864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.947 [2024-10-07 09:49:00.718880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.947 [2024-10-07 09:49:00.718892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.947 [2024-10-07 09:49:00.718922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.728654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.728753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.728778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.728792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.728804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.728835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.738671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.738751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.738776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.738790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.738802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.738832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.748769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.748869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.748894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.748909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.748921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.748951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.758733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.758863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.758890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.758910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.758923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.758963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.768802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.768898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.768923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.768938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.768949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.768979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.778805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.778889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.778914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.778929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.778941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.778970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.788896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.789010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.789037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.789052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.789064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.789094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.798874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.799013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.799039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.799054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.799066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.799095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.808884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.948 [2024-10-07 09:49:00.809015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.948 [2024-10-07 09:49:00.809042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.948 [2024-10-07 09:49:00.809057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.948 [2024-10-07 09:49:00.809069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.948 [2024-10-07 09:49:00.809099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.948 qpair failed and we were unable to recover it.
00:28:11.948 [2024-10-07 09:49:00.818936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.949 [2024-10-07 09:49:00.819023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.949 [2024-10-07 09:49:00.819049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.949 [2024-10-07 09:49:00.819063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.949 [2024-10-07 09:49:00.819076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90
00:28:11.949 [2024-10-07 09:49:00.819119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:11.949 qpair failed and we were unable to recover it.
00:28:11.949 [2024-10-07 09:49:00.829015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.829108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.829133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.829147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.829159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.829188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.839000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.839093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.839119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.839133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.839145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.839175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.849111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.849206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.849238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.849254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.849267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.849297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.859019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.859102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.859127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.859141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.859154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.859184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.869059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.869193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.869224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.869239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.869251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.869281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.879096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.879216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.879244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.879259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.879271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.879314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.889140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.889238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.889269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.889285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.889298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.889335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.899183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.899284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.899311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.899327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.899339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.899369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.909177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.949 [2024-10-07 09:49:00.909269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.949 [2024-10-07 09:49:00.909293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.949 [2024-10-07 09:49:00.909307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.949 [2024-10-07 09:49:00.909319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.949 [2024-10-07 09:49:00.909348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.949 qpair failed and we were unable to recover it. 
00:28:11.949 [2024-10-07 09:49:00.919194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.950 [2024-10-07 09:49:00.919276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.950 [2024-10-07 09:49:00.919300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.950 [2024-10-07 09:49:00.919314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.950 [2024-10-07 09:49:00.919326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.950 [2024-10-07 09:49:00.919356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.950 qpair failed and we were unable to recover it. 
00:28:11.950 [2024-10-07 09:49:00.929212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.950 [2024-10-07 09:49:00.929300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.950 [2024-10-07 09:49:00.929324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.950 [2024-10-07 09:49:00.929339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.950 [2024-10-07 09:49:00.929351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.950 [2024-10-07 09:49:00.929381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.950 qpair failed and we were unable to recover it. 
00:28:11.950 [2024-10-07 09:49:00.939323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.950 [2024-10-07 09:49:00.939409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.950 [2024-10-07 09:49:00.939447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.950 [2024-10-07 09:49:00.939463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.950 [2024-10-07 09:49:00.939475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:11.950 [2024-10-07 09:49:00.939506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:11.950 qpair failed and we were unable to recover it. 
00:28:12.212 [2024-10-07 09:49:00.949372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.212 [2024-10-07 09:49:00.949465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.212 [2024-10-07 09:49:00.949490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.212 [2024-10-07 09:49:00.949504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.212 [2024-10-07 09:49:00.949516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.212 [2024-10-07 09:49:00.949546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.212 qpair failed and we were unable to recover it. 
00:28:12.212 [2024-10-07 09:49:00.959292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.212 [2024-10-07 09:49:00.959402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.212 [2024-10-07 09:49:00.959427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.212 [2024-10-07 09:49:00.959441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.212 [2024-10-07 09:49:00.959454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.212 [2024-10-07 09:49:00.959483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.212 qpair failed and we were unable to recover it. 
00:28:12.212 [2024-10-07 09:49:00.969334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.212 [2024-10-07 09:49:00.969420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.212 [2024-10-07 09:49:00.969445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.212 [2024-10-07 09:49:00.969460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.212 [2024-10-07 09:49:00.969472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.212 [2024-10-07 09:49:00.969502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.212 qpair failed and we were unable to recover it. 
00:28:12.212 [2024-10-07 09:49:00.979381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.212 [2024-10-07 09:49:00.979465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.212 [2024-10-07 09:49:00.979491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.212 [2024-10-07 09:49:00.979505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.212 [2024-10-07 09:49:00.979517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.212 [2024-10-07 09:49:00.979553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.212 qpair failed and we were unable to recover it. 
00:28:12.212 [2024-10-07 09:49:00.989399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.212 [2024-10-07 09:49:00.989487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.212 [2024-10-07 09:49:00.989512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.212 [2024-10-07 09:49:00.989526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.212 [2024-10-07 09:49:00.989538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.212 [2024-10-07 09:49:00.989568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.212 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:00.999434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:00.999520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:00.999548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:00.999565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:00.999578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:00.999608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.009463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.009552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.009577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.009592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.009605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.009634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.019478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.019568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.019593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.019608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.019620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.019650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.029533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.029654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.029692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.029708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.029721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.029751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.039635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.039773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.039798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.039814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.039826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.039856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.049577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.049662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.049694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.049710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.049723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.049753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.059588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.059682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.059708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.059723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.059735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.059765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.069675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.069771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.069796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.069811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.069830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.069860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.079643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.079737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.079762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.079776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.079789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.079818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.089707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.089795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.089821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.089835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.089848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.089878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.099704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.099786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.099812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.099827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.099839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.099869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.109747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.109887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.109912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.109927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.213 [2024-10-07 09:49:01.109940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.213 [2024-10-07 09:49:01.109969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.213 qpair failed and we were unable to recover it. 
00:28:12.213 [2024-10-07 09:49:01.119881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.213 [2024-10-07 09:49:01.120017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.213 [2024-10-07 09:49:01.120043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.213 [2024-10-07 09:49:01.120059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.120071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.120101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.214 [2024-10-07 09:49:01.129786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.214 [2024-10-07 09:49:01.129870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.214 [2024-10-07 09:49:01.129897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.214 [2024-10-07 09:49:01.129911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.129924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.129954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.214 [2024-10-07 09:49:01.139851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.214 [2024-10-07 09:49:01.139972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.214 [2024-10-07 09:49:01.139997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.214 [2024-10-07 09:49:01.140012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.140025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.140055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.214 [2024-10-07 09:49:01.149856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.214 [2024-10-07 09:49:01.149948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.214 [2024-10-07 09:49:01.149973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.214 [2024-10-07 09:49:01.149987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.149999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.150028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.214 [2024-10-07 09:49:01.159915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.214 [2024-10-07 09:49:01.160037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.214 [2024-10-07 09:49:01.160062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.214 [2024-10-07 09:49:01.160083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.160097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.160127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.214 [2024-10-07 09:49:01.169898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.214 [2024-10-07 09:49:01.169988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.214 [2024-10-07 09:49:01.170014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.214 [2024-10-07 09:49:01.170029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.170041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.170071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.214 [2024-10-07 09:49:01.179935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.214 [2024-10-07 09:49:01.180035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.214 [2024-10-07 09:49:01.180063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.214 [2024-10-07 09:49:01.180080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.180094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.180125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.214 [2024-10-07 09:49:01.189992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.214 [2024-10-07 09:49:01.190082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.214 [2024-10-07 09:49:01.190107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.214 [2024-10-07 09:49:01.190121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.190134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.190164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.214 [2024-10-07 09:49:01.199989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.214 [2024-10-07 09:49:01.200072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.214 [2024-10-07 09:49:01.200098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.214 [2024-10-07 09:49:01.200112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.214 [2024-10-07 09:49:01.200125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.214 [2024-10-07 09:49:01.200154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.214 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.210023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.210109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.210135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.210149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.210162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.210191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.220058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.220141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.220166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.220181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.220193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.220224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.230125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.230215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.230240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.230255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.230267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.230297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.240158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.240245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.240271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.240286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.240299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.240331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.250169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.250260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.250286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.250307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.250321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.250363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.260162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.260279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.260304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.260318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.260331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.260372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.270217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.270339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.270363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.270377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.270390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.270420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.280303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.280382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.280407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.280421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.280434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.280463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.290292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.290385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.290410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.290425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.290437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.290467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.300304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.300394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.300419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.300433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.300445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.300475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.310424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.310515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.310540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.310554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.310566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.310596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.320374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.475 [2024-10-07 09:49:01.320493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.475 [2024-10-07 09:49:01.320519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.475 [2024-10-07 09:49:01.320533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.475 [2024-10-07 09:49:01.320546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.475 [2024-10-07 09:49:01.320588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.475 qpair failed and we were unable to recover it. 
00:28:12.475 [2024-10-07 09:49:01.330405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.330494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.330519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.330534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.330547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.330589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.340393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.340523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.340555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.340572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.340585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.340614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.350453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.350547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.350575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.350590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.350603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.350633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.360546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.360632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.360657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.360680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.360693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.360723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.370613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.370757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.370781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.370796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.370808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.370838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.380562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.380691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.380717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.380731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.380744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.380779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.390593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.390695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.390724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.390738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.390750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.390780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.400638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.400741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.400766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.400780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.400793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.400823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.410606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.410696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.410721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.410736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.410748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.410778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.420678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.420769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.420794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.420808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.420820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.420852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.430713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.430813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.430844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.430859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.430872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.430901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.440731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.440823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.440847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.440862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.440874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.440904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.450740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.450841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.450867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.450881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.476 [2024-10-07 09:49:01.450894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.476 [2024-10-07 09:49:01.450937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.476 qpair failed and we were unable to recover it. 
00:28:12.476 [2024-10-07 09:49:01.460842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.476 [2024-10-07 09:49:01.460940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.476 [2024-10-07 09:49:01.460965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.476 [2024-10-07 09:49:01.460979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.477 [2024-10-07 09:49:01.460991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.477 [2024-10-07 09:49:01.461020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.477 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.470802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.470933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.470960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.470976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.470989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.471025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.480911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.481006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.481034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.481050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.481062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.481092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.490866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.490953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.490978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.490992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.491004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.491034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.500877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.500957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.500982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.500997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.501010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.501053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.510940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.511033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.511062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.511079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.511091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.511122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.520952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.521041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.521072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.521087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.521099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.521142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.531021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.531144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.531169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.531184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.531197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.531227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.540969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.541048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.541073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.541088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.541100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.541130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.551074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.551166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.551191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.551205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.551218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.551259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.561156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.561267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.561292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.561306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.561325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.561355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.571086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.738 [2024-10-07 09:49:01.571207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.738 [2024-10-07 09:49:01.571232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.738 [2024-10-07 09:49:01.571247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.738 [2024-10-07 09:49:01.571259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.738 [2024-10-07 09:49:01.571289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.738 qpair failed and we were unable to recover it. 
00:28:12.738 [2024-10-07 09:49:01.581089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.581180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.581206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.581220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.581233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.581276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.591164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.591278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.591303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.591317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.591330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.591360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.601152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.601279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.601304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.601319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.601332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.601361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.611169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.611260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.611286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.611300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.611313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.611343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.621239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.621328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.621353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.621367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.621381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.621410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.631306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.631399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.631425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.631440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.631452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.631482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.641298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.641402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.641427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.641442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.641455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.641484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.651285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.651384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.651409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.651424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.651443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.651474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.661315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.661394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.661419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.661434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.661446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.661499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.671389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.671489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.671513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.671528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.671541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.671571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.681406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.681498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.681524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.681538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.681554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.681585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.691421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.691514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.691540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.691555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.691567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.691597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.701445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.701553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.701579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.701594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.739 [2024-10-07 09:49:01.701606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.739 [2024-10-07 09:49:01.701636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.739 qpair failed and we were unable to recover it. 
00:28:12.739 [2024-10-07 09:49:01.711476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.739 [2024-10-07 09:49:01.711612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.739 [2024-10-07 09:49:01.711642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.739 [2024-10-07 09:49:01.711659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.740 [2024-10-07 09:49:01.711682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.740 [2024-10-07 09:49:01.711715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.740 qpair failed and we were unable to recover it. 
00:28:12.740 [2024-10-07 09:49:01.721515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.740 [2024-10-07 09:49:01.721601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.740 [2024-10-07 09:49:01.721627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.740 [2024-10-07 09:49:01.721641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.740 [2024-10-07 09:49:01.721654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.740 [2024-10-07 09:49:01.721707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.740 qpair failed and we were unable to recover it. 
00:28:12.740 [2024-10-07 09:49:01.731519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.740 [2024-10-07 09:49:01.731611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.740 [2024-10-07 09:49:01.731639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.740 [2024-10-07 09:49:01.731654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.740 [2024-10-07 09:49:01.731673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:12.740 [2024-10-07 09:49:01.731706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:12.740 qpair failed and we were unable to recover it. 
00:28:13.000 [2024-10-07 09:49:01.741538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.000 [2024-10-07 09:49:01.741626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.000 [2024-10-07 09:49:01.741651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.000 [2024-10-07 09:49:01.741680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.000 [2024-10-07 09:49:01.741695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.000 [2024-10-07 09:49:01.741725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.000 qpair failed and we were unable to recover it. 
00:28:13.000 [2024-10-07 09:49:01.751597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.000 [2024-10-07 09:49:01.751698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.000 [2024-10-07 09:49:01.751726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.000 [2024-10-07 09:49:01.751741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.000 [2024-10-07 09:49:01.751754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.000 [2024-10-07 09:49:01.751783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.000 qpair failed and we were unable to recover it. 
00:28:13.000 [2024-10-07 09:49:01.761602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.000 [2024-10-07 09:49:01.761698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.000 [2024-10-07 09:49:01.761724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.000 [2024-10-07 09:49:01.761738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.000 [2024-10-07 09:49:01.761750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.000 [2024-10-07 09:49:01.761780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.000 qpair failed and we were unable to recover it. 
00:28:13.000 [2024-10-07 09:49:01.771636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.000 [2024-10-07 09:49:01.771750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.000 [2024-10-07 09:49:01.771775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.000 [2024-10-07 09:49:01.771789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.000 [2024-10-07 09:49:01.771802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.000 [2024-10-07 09:49:01.771832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.000 qpair failed and we were unable to recover it. 
00:28:13.000 [2024-10-07 09:49:01.781649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.000 [2024-10-07 09:49:01.781743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.000 [2024-10-07 09:49:01.781768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.000 [2024-10-07 09:49:01.781783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.000 [2024-10-07 09:49:01.781795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.000 [2024-10-07 09:49:01.781827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.000 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.791770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.791890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.791915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.791930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.791943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.791973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.801769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.801875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.801903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.801920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.801932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.801964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.811795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.811912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.811939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.811954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.811967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.811998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.821853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.821935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.821960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.821975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.821987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.822017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.831823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.831912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.831936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.831956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.831970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.831999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.841853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.841944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.841969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.841983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.841996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.842025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.851857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.851948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.851974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.851988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.852001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.852030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.861887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.861975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.862000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.862015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.862028] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.862060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.871969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.872065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.872090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.872104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.872118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.872148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.882070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.882154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.882180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.882195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.882207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.882237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.891994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.892082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.892107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.892122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.892134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.892176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.902000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.902080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.902105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.902120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.902132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.902162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.912085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.912192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.912217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.001 [2024-10-07 09:49:01.912231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.001 [2024-10-07 09:49:01.912244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.001 [2024-10-07 09:49:01.912285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.001 qpair failed and we were unable to recover it. 
00:28:13.001 [2024-10-07 09:49:01.922088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.001 [2024-10-07 09:49:01.922175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.001 [2024-10-07 09:49:01.922208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.002 [2024-10-07 09:49:01.922224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.002 [2024-10-07 09:49:01.922237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.002 [2024-10-07 09:49:01.922267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.002 qpair failed and we were unable to recover it. 
00:28:13.002 [2024-10-07 09:49:01.932168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.002 [2024-10-07 09:49:01.932274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.002 [2024-10-07 09:49:01.932300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.002 [2024-10-07 09:49:01.932314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.002 [2024-10-07 09:49:01.932327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.002 [2024-10-07 09:49:01.932357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.002 qpair failed and we were unable to recover it. 
00:28:13.002 [2024-10-07 09:49:01.942163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.002 [2024-10-07 09:49:01.942247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.002 [2024-10-07 09:49:01.942272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.002 [2024-10-07 09:49:01.942285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.002 [2024-10-07 09:49:01.942298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.002 [2024-10-07 09:49:01.942329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.002 qpair failed and we were unable to recover it. 
00:28:13.002 [2024-10-07 09:49:01.952166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.002 [2024-10-07 09:49:01.952258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.002 [2024-10-07 09:49:01.952283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.002 [2024-10-07 09:49:01.952297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.002 [2024-10-07 09:49:01.952310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.002 [2024-10-07 09:49:01.952339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.002 qpair failed and we were unable to recover it. 
00:28:13.002 [2024-10-07 09:49:01.962175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.002 [2024-10-07 09:49:01.962266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.002 [2024-10-07 09:49:01.962290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.002 [2024-10-07 09:49:01.962304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.002 [2024-10-07 09:49:01.962316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.002 [2024-10-07 09:49:01.962351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.002 qpair failed and we were unable to recover it. 
00:28:13.002 [2024-10-07 09:49:01.972208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.002 [2024-10-07 09:49:01.972297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.002 [2024-10-07 09:49:01.972328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.002 [2024-10-07 09:49:01.972342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.002 [2024-10-07 09:49:01.972354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.002 [2024-10-07 09:49:01.972383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.002 qpair failed and we were unable to recover it. 
00:28:13.002 [2024-10-07 09:49:01.982250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.002 [2024-10-07 09:49:01.982337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.002 [2024-10-07 09:49:01.982362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.002 [2024-10-07 09:49:01.982376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.002 [2024-10-07 09:49:01.982389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.002 [2024-10-07 09:49:01.982418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.002 qpair failed and we were unable to recover it. 
00:28:13.002 [2024-10-07 09:49:01.992283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.002 [2024-10-07 09:49:01.992376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.002 [2024-10-07 09:49:01.992401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.002 [2024-10-07 09:49:01.992416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.002 [2024-10-07 09:49:01.992428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.002 [2024-10-07 09:49:01.992469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.002 qpair failed and we were unable to recover it. 
00:28:13.262 [2024-10-07 09:49:02.002337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.262 [2024-10-07 09:49:02.002428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.262 [2024-10-07 09:49:02.002453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.262 [2024-10-07 09:49:02.002468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.262 [2024-10-07 09:49:02.002481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.262 [2024-10-07 09:49:02.002510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.262 qpair failed and we were unable to recover it. 
00:28:13.262 [2024-10-07 09:49:02.012466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.262 [2024-10-07 09:49:02.012554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.262 [2024-10-07 09:49:02.012585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.262 [2024-10-07 09:49:02.012600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.262 [2024-10-07 09:49:02.012628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.262 [2024-10-07 09:49:02.012657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.262 qpair failed and we were unable to recover it. 
00:28:13.262 [2024-10-07 09:49:02.022383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.262 [2024-10-07 09:49:02.022502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.262 [2024-10-07 09:49:02.022529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.262 [2024-10-07 09:49:02.022544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.262 [2024-10-07 09:49:02.022557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.262 [2024-10-07 09:49:02.022586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.262 qpair failed and we were unable to recover it. 
00:28:13.262 [2024-10-07 09:49:02.032395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.262 [2024-10-07 09:49:02.032487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.262 [2024-10-07 09:49:02.032511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.262 [2024-10-07 09:49:02.032525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.262 [2024-10-07 09:49:02.032538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.262 [2024-10-07 09:49:02.032570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.262 qpair failed and we were unable to recover it. 
00:28:13.262 [2024-10-07 09:49:02.042409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.262 [2024-10-07 09:49:02.042525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.262 [2024-10-07 09:49:02.042552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.262 [2024-10-07 09:49:02.042566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.262 [2024-10-07 09:49:02.042578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.262 [2024-10-07 09:49:02.042608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.262 qpair failed and we were unable to recover it. 
00:28:13.262 [2024-10-07 09:49:02.052450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.262 [2024-10-07 09:49:02.052536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.262 [2024-10-07 09:49:02.052561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.262 [2024-10-07 09:49:02.052575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.262 [2024-10-07 09:49:02.052593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.052625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.062488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.062580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.062605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.062620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.062632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.062662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.072518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.072605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.072630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.072645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.072658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.072711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.082532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.082628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.082656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.082679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.082693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.082723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.092679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.092769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.092794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.092808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.092820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.092850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.102617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.102740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.102767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.102782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.102794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.102836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.112625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.112765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.112791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.112806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.112818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.112849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.122658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.122752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.122780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.122795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.122807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.122837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.132662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.132767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.132792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.132807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.132820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.132849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.142698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.142823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.142849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.142864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.142882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.142913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.152742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.152852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.152879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.152893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.152905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.152935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.162797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.162923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.162948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.162964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.162976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.163005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.172807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.172901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.172925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.172940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.172952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.172982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.182815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.263 [2024-10-07 09:49:02.182896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.263 [2024-10-07 09:49:02.182923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.263 [2024-10-07 09:49:02.182938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.263 [2024-10-07 09:49:02.182951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.263 [2024-10-07 09:49:02.182992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.263 qpair failed and we were unable to recover it. 
00:28:13.263 [2024-10-07 09:49:02.192868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.264 [2024-10-07 09:49:02.192963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.264 [2024-10-07 09:49:02.192988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.264 [2024-10-07 09:49:02.193002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.264 [2024-10-07 09:49:02.193014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.264 [2024-10-07 09:49:02.193044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.264 qpair failed and we were unable to recover it. 
00:28:13.264 [2024-10-07 09:49:02.202858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.264 [2024-10-07 09:49:02.202949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.264 [2024-10-07 09:49:02.202973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.264 [2024-10-07 09:49:02.202987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.264 [2024-10-07 09:49:02.203000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.264 [2024-10-07 09:49:02.203029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.264 qpair failed and we were unable to recover it. 
00:28:13.264 [2024-10-07 09:49:02.212897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.264 [2024-10-07 09:49:02.212987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.264 [2024-10-07 09:49:02.213012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.264 [2024-10-07 09:49:02.213027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.264 [2024-10-07 09:49:02.213039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.264 [2024-10-07 09:49:02.213068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.264 qpair failed and we were unable to recover it. 
00:28:13.264 [2024-10-07 09:49:02.222911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.264 [2024-10-07 09:49:02.222992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.264 [2024-10-07 09:49:02.223018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.264 [2024-10-07 09:49:02.223032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.264 [2024-10-07 09:49:02.223044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.264 [2024-10-07 09:49:02.223085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.264 qpair failed and we were unable to recover it. 
00:28:13.264 [2024-10-07 09:49:02.233033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.264 [2024-10-07 09:49:02.233138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.264 [2024-10-07 09:49:02.233164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.264 [2024-10-07 09:49:02.233185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.264 [2024-10-07 09:49:02.233198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.264 [2024-10-07 09:49:02.233228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.264 qpair failed and we were unable to recover it. 
00:28:13.264 [2024-10-07 09:49:02.242997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.264 [2024-10-07 09:49:02.243083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.264 [2024-10-07 09:49:02.243108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.264 [2024-10-07 09:49:02.243122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.264 [2024-10-07 09:49:02.243135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.264 [2024-10-07 09:49:02.243164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.264 qpair failed and we were unable to recover it. 
00:28:13.264 [2024-10-07 09:49:02.253100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.264 [2024-10-07 09:49:02.253194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.264 [2024-10-07 09:49:02.253219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.264 [2024-10-07 09:49:02.253233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.264 [2024-10-07 09:49:02.253246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.264 [2024-10-07 09:49:02.253275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.264 qpair failed and we were unable to recover it. 
00:28:13.523 [2024-10-07 09:49:02.263034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.523 [2024-10-07 09:49:02.263151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.523 [2024-10-07 09:49:02.263177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.523 [2024-10-07 09:49:02.263192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.523 [2024-10-07 09:49:02.263211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.523 [2024-10-07 09:49:02.263241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.523 qpair failed and we were unable to recover it. 
00:28:13.523 [2024-10-07 09:49:02.273060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.523 [2024-10-07 09:49:02.273146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.523 [2024-10-07 09:49:02.273171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.523 [2024-10-07 09:49:02.273185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.523 [2024-10-07 09:49:02.273198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.523 [2024-10-07 09:49:02.273226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.523 qpair failed and we were unable to recover it. 
00:28:13.523 [2024-10-07 09:49:02.283086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.523 [2024-10-07 09:49:02.283179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.523 [2024-10-07 09:49:02.283204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.523 [2024-10-07 09:49:02.283218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.523 [2024-10-07 09:49:02.283230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.523 [2024-10-07 09:49:02.283260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.523 qpair failed and we were unable to recover it. 
00:28:13.523 [2024-10-07 09:49:02.293187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.523 [2024-10-07 09:49:02.293303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.523 [2024-10-07 09:49:02.293330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.523 [2024-10-07 09:49:02.293345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.523 [2024-10-07 09:49:02.293357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.523 [2024-10-07 09:49:02.293387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.523 qpair failed and we were unable to recover it. 
00:28:13.523 [2024-10-07 09:49:02.303140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.523 [2024-10-07 09:49:02.303233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.523 [2024-10-07 09:49:02.303258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.523 [2024-10-07 09:49:02.303272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.523 [2024-10-07 09:49:02.303284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe7a8000b90 00:28:13.523 [2024-10-07 09:49:02.303315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.523 qpair failed and we were unable to recover it. 
00:28:13.523 [2024-10-07 09:49:02.313205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.523 [2024-10-07 09:49:02.313308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.523 [2024-10-07 09:49:02.313340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.523 [2024-10-07 09:49:02.313356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.523 [2024-10-07 09:49:02.313369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1fab230 00:28:13.523 [2024-10-07 09:49:02.313399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.523 qpair failed and we were unable to recover it. 
00:28:13.523 [2024-10-07 09:49:02.323210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.523 [2024-10-07 09:49:02.323307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.523 [2024-10-07 09:49:02.323332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.523 [2024-10-07 09:49:02.323353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.523 [2024-10-07 09:49:02.323367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1fab230 00:28:13.523 [2024-10-07 09:49:02.323397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.523 qpair failed and we were unable to recover it. 00:28:13.523 [2024-10-07 09:49:02.323520] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:13.523 A controller has encountered a failure and is being reset. 00:28:13.523 Controller properly reset. 00:28:13.523 Initializing NVMe Controllers 00:28:13.523 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:13.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:13.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:13.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:13.523 Initialization complete. Launching workers. 
00:28:13.523 Starting thread on core 1 00:28:13.523 Starting thread on core 2 00:28:13.523 Starting thread on core 3 00:28:13.523 Starting thread on core 0 00:28:13.523 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:13.523 00:28:13.523 real 0m10.866s 00:28:13.523 user 0m19.334s 00:28:13.523 sys 0m5.321s 00:28:13.523 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:13.523 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:13.523 ************************************ 00:28:13.523 END TEST nvmf_target_disconnect_tc2 00:28:13.523 ************************************ 00:28:13.523 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:13.523 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:13.524 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:13.524 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:13.524 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:13.524 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.524 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:13.524 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.524 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.524 rmmod nvme_tcp 00:28:13.820 rmmod nvme_fabrics 00:28:13.820 rmmod nvme_keyring 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 330048 ']' 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 330048 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 330048 ']' 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 330048 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 330048 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 330048' 00:28:13.820 killing process with pid 330048 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 330048 00:28:13.820 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 330048 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.079 09:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.990 09:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.990 00:28:15.990 real 0m15.826s 00:28:15.990 user 0m45.717s 00:28:15.990 sys 0m7.390s 00:28:15.990 09:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.990 09:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:15.990 ************************************ 00:28:15.990 END TEST nvmf_target_disconnect 00:28:15.990 ************************************ 00:28:15.990 09:49:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:15.990 00:28:15.990 real 5m8.545s 00:28:15.990 user 10m54.950s 00:28:15.990 sys 1m12.191s 00:28:15.990 09:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.990 09:49:04 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.990 ************************************ 00:28:15.990 END TEST nvmf_host 00:28:15.990 ************************************ 00:28:15.990 09:49:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:15.990 09:49:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:15.990 09:49:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:15.990 09:49:04 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:15.990 09:49:04 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.250 09:49:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:16.250 ************************************ 00:28:16.250 START TEST nvmf_target_core_interrupt_mode 00:28:16.250 ************************************ 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:16.250 * Looking for test storage... 
00:28:16.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:16.250 09:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:16.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.250 --rc 
genhtml_branch_coverage=1 00:28:16.250 --rc genhtml_function_coverage=1 00:28:16.250 --rc genhtml_legend=1 00:28:16.250 --rc geninfo_all_blocks=1 00:28:16.250 --rc geninfo_unexecuted_blocks=1 00:28:16.250 00:28:16.250 ' 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:16.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.250 --rc genhtml_branch_coverage=1 00:28:16.250 --rc genhtml_function_coverage=1 00:28:16.250 --rc genhtml_legend=1 00:28:16.250 --rc geninfo_all_blocks=1 00:28:16.250 --rc geninfo_unexecuted_blocks=1 00:28:16.250 00:28:16.250 ' 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:16.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.250 --rc genhtml_branch_coverage=1 00:28:16.250 --rc genhtml_function_coverage=1 00:28:16.250 --rc genhtml_legend=1 00:28:16.250 --rc geninfo_all_blocks=1 00:28:16.250 --rc geninfo_unexecuted_blocks=1 00:28:16.250 00:28:16.250 ' 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:16.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.250 --rc genhtml_branch_coverage=1 00:28:16.250 --rc genhtml_function_coverage=1 00:28:16.250 --rc genhtml_legend=1 00:28:16.250 --rc geninfo_all_blocks=1 00:28:16.250 --rc geninfo_unexecuted_blocks=1 00:28:16.250 00:28:16.250 ' 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.250 
09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.250 09:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.250 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:16.251 
09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:16.251 ************************************ 00:28:16.251 START TEST nvmf_abort 00:28:16.251 ************************************ 00:28:16.251 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:16.510 * Looking for test storage... 
00:28:16.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.510 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:16.511 09:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.511 --rc genhtml_branch_coverage=1 00:28:16.511 --rc genhtml_function_coverage=1 00:28:16.511 --rc genhtml_legend=1 00:28:16.511 --rc geninfo_all_blocks=1 00:28:16.511 --rc geninfo_unexecuted_blocks=1 00:28:16.511 00:28:16.511 ' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.511 --rc genhtml_branch_coverage=1 00:28:16.511 --rc genhtml_function_coverage=1 00:28:16.511 --rc genhtml_legend=1 00:28:16.511 --rc geninfo_all_blocks=1 00:28:16.511 --rc geninfo_unexecuted_blocks=1 00:28:16.511 00:28:16.511 ' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.511 --rc genhtml_branch_coverage=1 00:28:16.511 --rc genhtml_function_coverage=1 00:28:16.511 --rc genhtml_legend=1 00:28:16.511 --rc geninfo_all_blocks=1 00:28:16.511 --rc geninfo_unexecuted_blocks=1 00:28:16.511 00:28:16.511 ' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.511 --rc genhtml_branch_coverage=1 00:28:16.511 --rc genhtml_function_coverage=1 00:28:16.511 --rc genhtml_legend=1 00:28:16.511 --rc geninfo_all_blocks=1 00:28:16.511 --rc geninfo_unexecuted_blocks=1 00:28:16.511 00:28:16.511 ' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.511 09:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.511 09:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:16.511 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:16.512 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.512 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.512 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.512 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:16.512 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:16.512 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.512 09:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:18.417 09:49:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:28:18.417 Found 0000:09:00.0 (0x8086 - 0x1592) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:28:18.417 Found 0000:09:00.1 (0x8086 - 0x1592) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.417 
09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:18.417 Found net devices under 0000:09:00.0: cvl_0_0 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:18.417 Found net devices under 0000:09:00.1: cvl_0_1 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.417 09:49:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.417 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:18.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:28:18.676 00:28:18.676 --- 10.0.0.2 ping statistics --- 00:28:18.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.676 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:18.676 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:18.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:28:18.676 00:28:18.677 --- 10.0.0.1 ping statistics --- 00:28:18.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.677 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=332740 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 332740 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 332740 ']' 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:18.677 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:18.677 [2024-10-07 09:49:07.611882] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:18.677 [2024-10-07 09:49:07.613025] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:28:18.677 [2024-10-07 09:49:07.613095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.936 [2024-10-07 09:49:07.675713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:18.936 [2024-10-07 09:49:07.781806] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.936 [2024-10-07 09:49:07.781861] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.936 [2024-10-07 09:49:07.781884] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.936 [2024-10-07 09:49:07.781904] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.936 [2024-10-07 09:49:07.781914] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.936 [2024-10-07 09:49:07.782689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.936 [2024-10-07 09:49:07.782734] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.936 [2024-10-07 09:49:07.782738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.936 [2024-10-07 09:49:07.888053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:18.936 [2024-10-07 09:49:07.888288] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:18.936 [2024-10-07 09:49:07.888290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:18.936 [2024-10-07 09:49:07.888544] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:18.936 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.936 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:28:18.936 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:18.936 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:18.936 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.196 [2024-10-07 09:49:07.943438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:19.196 Malloc0 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.196 Delay0 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.196 09:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.196 [2024-10-07 09:49:08.003692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.196 09:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.196 09:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:19.196 09:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.196 09:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.196 09:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.196 09:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:19.196 [2024-10-07 09:49:08.064832] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:21.104 Initializing NVMe Controllers 00:28:21.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:21.104 controller IO queue size 128 less than required 00:28:21.104 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:21.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:21.104 Initialization complete. Launching workers. 
00:28:21.104 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28911 00:28:21.104 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28968, failed to submit 66 00:28:21.104 success 28911, unsuccessful 57, failed 0 00:28:21.104 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:21.104 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.104 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.363 rmmod nvme_tcp 00:28:21.363 rmmod nvme_fabrics 00:28:21.363 rmmod nvme_keyring 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:21.363 09:49:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 332740 ']' 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 332740 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 332740 ']' 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 332740 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332740 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332740' 00:28:21.363 killing process with pid 332740 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 332740 00:28:21.363 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 332740 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.622 09:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.534 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:23.534 00:28:23.534 real 0m7.283s 00:28:23.534 user 0m9.014s 00:28:23.534 sys 0m2.878s 00:28:23.534 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.534 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.534 ************************************ 00:28:23.534 END TEST nvmf_abort 00:28:23.534 ************************************ 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:23.794 ************************************ 00:28:23.794 START TEST nvmf_ns_hotplug_stress 00:28:23.794 ************************************ 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:23.794 * Looking for test storage... 00:28:23.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:23.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.794 --rc genhtml_branch_coverage=1 00:28:23.794 --rc genhtml_function_coverage=1 00:28:23.794 --rc genhtml_legend=1 00:28:23.794 --rc geninfo_all_blocks=1 00:28:23.794 --rc geninfo_unexecuted_blocks=1 00:28:23.794 00:28:23.794 ' 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:23.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.794 --rc genhtml_branch_coverage=1 00:28:23.794 --rc genhtml_function_coverage=1 00:28:23.794 --rc genhtml_legend=1 00:28:23.794 --rc geninfo_all_blocks=1 00:28:23.794 --rc geninfo_unexecuted_blocks=1 00:28:23.794 00:28:23.794 ' 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:23.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.794 --rc genhtml_branch_coverage=1 00:28:23.794 --rc genhtml_function_coverage=1 00:28:23.794 --rc genhtml_legend=1 00:28:23.794 --rc geninfo_all_blocks=1 00:28:23.794 --rc geninfo_unexecuted_blocks=1 00:28:23.794 00:28:23.794 ' 00:28:23.794 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:23.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.794 --rc genhtml_branch_coverage=1 00:28:23.795 --rc genhtml_function_coverage=1 00:28:23.795 --rc genhtml_legend=1 00:28:23.795 --rc geninfo_all_blocks=1 00:28:23.795 --rc geninfo_unexecuted_blocks=1 00:28:23.795 00:28:23.795 ' 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:23.795 09:49:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:23.795 09:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.795 09:49:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:26.334 09:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.334 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.334 
09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:28:26.335 Found 0000:09:00.0 (0x8086 - 0x1592) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.335 09:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:28:26.335 Found 0000:09:00.1 (0x8086 - 0x1592) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:26.335 09:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:26.335 Found net devices under 0000:09:00.0: cvl_0_0 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:26.335 Found net devices under 0000:09:00.1: cvl_0_1 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:28:26.335 00:28:26.335 --- 10.0.0.2 ping statistics --- 00:28:26.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.335 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:26.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:28:26.335 00:28:26.335 --- 10.0.0.1 ping statistics --- 00:28:26.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.335 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.335 09:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=334878 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 334878 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 334878 ']' 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:26.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:26.335 09:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:26.335 [2024-10-07 09:49:14.967937] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:26.335 [2024-10-07 09:49:14.968997] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:28:26.335 [2024-10-07 09:49:14.969065] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.335 [2024-10-07 09:49:15.033116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:26.335 [2024-10-07 09:49:15.135846] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.335 [2024-10-07 09:49:15.135902] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.335 [2024-10-07 09:49:15.135925] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.335 [2024-10-07 09:49:15.135935] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.335 [2024-10-07 09:49:15.135944] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:26.335 [2024-10-07 09:49:15.136654] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.335 [2024-10-07 09:49:15.136791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.335 [2024-10-07 09:49:15.136792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.335 [2024-10-07 09:49:15.233096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:26.335 [2024-10-07 09:49:15.233333] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:26.335 [2024-10-07 09:49:15.233356] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:26.335 [2024-10-07 09:49:15.233591] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:26.335 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:26.335 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:28:26.335 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:26.335 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.335 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:26.335 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.335 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:26.335 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:26.594 [2024-10-07 09:49:15.589478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.854 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:27.111 09:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.370 [2024-10-07 09:49:16.137871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.370 09:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:27.628 09:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:27.887 Malloc0 00:28:27.888 09:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:28.146 Delay0 00:28:28.146 09:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.404 09:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:28.662 NULL1 00:28:28.923 09:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:29.182 09:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=335278 00:28:29.182 09:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:29.182 09:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:29.182 09:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.441 09:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.699 09:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:29.699 09:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:29.957 true 00:28:29.957 09:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:29.957 09:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.216 09:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.475 09:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:30.475 09:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:30.733 true 00:28:30.991 09:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:30.991 09:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.249 09:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.507 09:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:31.507 09:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:31.764 true 00:28:31.765 09:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:31.765 09:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.022 09:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.280 09:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:32.280 09:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:32.539 true 00:28:32.539 09:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:32.539 09:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.474 Read completed with error (sct=0, sc=11) 00:28:33.474 09:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.732 09:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:33.732 09:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:33.990 true 00:28:33.990 09:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:33.990 09:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.248 09:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.507 09:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:34.507 09:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:34.766 true 00:28:34.766 09:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:34.766 09:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.024 09:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.284 09:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:28:35.284 09:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:35.853 true 00:28:35.853 09:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:35.853 09:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.791 09:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.049 09:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:37.049 09:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:37.307 true 00:28:37.307 09:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:37.307 09:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.565 09:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.822 09:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:28:37.822 09:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:38.080 true 00:28:38.080 09:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:38.080 09:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.338 09:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.596 09:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:38.596 09:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:38.854 true 00:28:38.854 09:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:38.854 09:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.786 09:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.786 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.044 09:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:40.044 09:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:40.301 true 00:28:40.301 09:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:40.301 09:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.559 09:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.818 09:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:40.818 09:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:41.077 true 00:28:41.077 09:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:41.077 09:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.335 09:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.900 09:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:41.900 09:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:41.900 true 00:28:41.900 09:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:41.900 09:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.273 09:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.273 09:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:43.273 09:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:43.531 true 00:28:43.531 09:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:43.531 09:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.789 09:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.047 09:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:44.047 09:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:44.304 true 00:28:44.305 09:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:44.305 09:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.568 09:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.827 09:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:44.827 09:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:45.085 true 00:28:45.085 09:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:45.085 09:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:28:46.021 09:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.279 09:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:46.279 09:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:46.536 true 00:28:46.536 09:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:46.536 09:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.795 09:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.054 09:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:47.054 09:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:47.313 true 00:28:47.313 09:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:47.571 09:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.830 09:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.089 09:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:48.089 09:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:48.349 true 00:28:48.349 09:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:48.349 09:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.289 09:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.551 09:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:49.551 09:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:49.811 true 00:28:49.811 09:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:49.812 09:49:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.070 09:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.328 09:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:50.328 09:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:50.587 true 00:28:50.587 09:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:50.587 09:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.524 09:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.782 09:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:51.782 09:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:52.041 true 00:28:52.041 09:49:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:52.041 09:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.299 09:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.557 09:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:52.558 09:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:52.816 true 00:28:52.816 09:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:52.816 09:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.753 09:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.753 09:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1024 00:28:53.753 09:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:54.013 true 00:28:54.273 09:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:54.273 09:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.531 09:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.789 09:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:54.789 09:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:55.048 true 00:28:55.048 09:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:55.048 09:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.614 09:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:56.134 09:49:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:56.134 09:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:56.391 true 00:28:56.391 09:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:56.391 09:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.650 09:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.908 09:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:56.908 09:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:57.165 true 00:28:57.165 09:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:57.165 09:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.424 09:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:28:57.682 09:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:57.682 09:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:57.940 true 00:28:57.940 09:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:57.940 09:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.875 09:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.134 09:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:59.134 09:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:59.392 true 00:28:59.392 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:28:59.392 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.392 Initializing NVMe Controllers 00:28:59.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:28:59.392 Controller IO queue size 128, less than required. 00:28:59.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.392 Controller IO queue size 128, less than required. 00:28:59.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:59.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:59.392 Initialization complete. Launching workers. 00:28:59.392 ======================================================== 00:28:59.392 Latency(us) 00:28:59.392 Device Information : IOPS MiB/s Average min max 00:28:59.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 438.77 0.21 108725.69 3481.01 1013986.77 00:28:59.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7729.62 3.77 16511.13 1576.72 457675.61 00:28:59.392 ======================================================== 00:28:59.392 Total : 8168.40 3.99 21464.53 1576.72 1013986.77 00:28:59.392 00:28:59.650 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.908 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:28:59.908 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:29:00.166 true 00:29:00.166 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 335278 00:29:00.166 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (335278) - No such process 00:29:00.166 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 335278 00:29:00.166 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.424 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.683 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:00.683 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:00.683 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:00.683 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:00.683 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:00.941 null0 00:29:00.941 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:00.941 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:00.941 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:01.201 null1 00:29:01.201 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:01.201 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:01.201 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:01.460 null2 00:29:01.460 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:01.460 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:01.460 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:01.720 null3 00:29:01.720 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:01.720 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:01.720 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:01.981 null4 00:29:01.981 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:01.981 09:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:01.981 09:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:02.241 null5 00:29:02.241 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:02.241 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:02.241 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:02.500 null6 00:29:02.500 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:02.500 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:02.500 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:02.759 null7 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:03.018 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 339219 339220 339222 339224 339226 339228 339230 339232 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.019 09:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:03.277 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:03.277 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.277 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:29:03.277 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:03.277 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:03.277 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:03.277 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:03.277 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.536 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:03.794 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:03.794 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.794 09:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:03.794 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:03.794 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:03.795 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:03.795 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:03.795 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:04.053 09:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.053 09:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:04.311 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.311 09:49:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:04.311 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:04.311 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:04.311 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:04.311 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:04.311 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:04.311 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.569 09:49:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.569 09:49:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.569 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:05.137 09:49:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.137 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:05.137 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:05.137 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:05.137 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:05.137 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:05.137 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:05.137 09:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:05.137 09:49:54 
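The trace above shows one full pass of the hotplug stress loop: namespace IDs 1–8 are attached to nqn.2016-06.io.spdk:cnode1 with `nvmf_subsystem_add_ns` (backed by bdevs null0–null7), then all eight are detached with `nvmf_subsystem_remove_ns`, guarded by the `(( ++i ))` / `(( i < 10 ))` counter at ns_hotplug_stress.sh@16. A minimal self-contained sketch of that cycle follows; `rpc.py` is stubbed with `echo` here (an assumption for illustration, since the real test drives SPDK's `scripts/rpc.py` against a live target), and the real run issues the RPCs concurrently, which is why the logged order is shuffled:

```shell
# Sketch of the add/remove cycle traced above (ns_hotplug_stress.sh @16-@18).
# RPC is a stand-in for /.../spdk/scripts/rpc.py so this sketch runs standalone.
RPC="echo rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"

hotplug_pass() {
  local n
  for n in 1 2 3 4 5 6 7 8; do          # attach bdevs null0..null7 as nsid 1..8
    $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
  done
  for n in 1 2 3 4 5 6 7 8; do          # then detach every namespace again
    $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
  done
}

hotplug_pass
```

In the actual test the whole pass repeats ten times, and the add/remove RPCs are launched in parallel rather than sequentially, so a pass here is only the logical shape of what the log records.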
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.137 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.137 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:05.137 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.137 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.137 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.396 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.396 09:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:05.655 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.655 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:05.655 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:05.655 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:05.655 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:05.655 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:05.655 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:05.655 09:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.913 09:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:05.913 09:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.913 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.914 09:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:06.172 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:06.172 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.172 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:06.172 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:06.172 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:06.172 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:06.172 09:49:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:06.172 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
8 nqn.2016-06.io.spdk:cnode1 null7 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.431 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:06.689 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:06.689 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:06.689 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.689 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:06.689 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:06.689 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:06.689 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:06.689 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:06.948 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.948 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.948 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:06.948 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.948 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.948 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:07.207 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.207 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.207 09:49:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:07.207 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.207 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.208 09:49:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.208 09:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:07.467 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:07.467 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:07.467 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:07.467 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:07.467 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:07.467 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:07.467 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:07.467 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:07.726 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:07.985 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:07.985 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:07.985 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:07.985 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 4
00:29:07.985 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:07.985 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:07.985 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:07.985 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:08.244 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.245 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:08.504 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:08.504 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:08.504 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:08.504 09:49:57
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:08.504 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:08.504 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:08.504 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:08.504 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:29:08.764 09:49:57
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:08.764 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:09.023 rmmod nvme_tcp
00:29:09.023 rmmod nvme_fabrics
00:29:09.023 rmmod nvme_keyring
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 334878 ']'
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 334878
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 334878 ']'
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 334878
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 334878
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 334878'
00:29:09.023 killing process with pid 334878
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 334878
00:29:09.023 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 334878
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:09.283 09:49:58
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:09.283 09:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:11.193 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:11.193
00:29:11.193 real 0m47.584s
00:29:11.193 user 3m18.673s
00:29:11.193 sys 0m22.459s
00:29:11.193 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:11.193 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:29:11.193 ************************************
00:29:11.193 END TEST nvmf_ns_hotplug_stress
00:29:11.193 ************************************
00:29:11.193 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:29:11.193 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:29:11.193 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:11.193 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:11.454 ************************************
00:29:11.454 START TEST nvmf_delete_subsystem
00:29:11.454 ************************************
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:29:11.454 * Looking for test storage...
00:29:11.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:29:11.454 09:50:00
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:29:11.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:11.454 --rc genhtml_branch_coverage=1
00:29:11.454 --rc genhtml_function_coverage=1
00:29:11.454 --rc genhtml_legend=1
00:29:11.454 --rc geninfo_all_blocks=1
00:29:11.454 --rc geninfo_unexecuted_blocks=1
00:29:11.454
00:29:11.454 '
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:29:11.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:11.454 --rc genhtml_branch_coverage=1
00:29:11.454 --rc genhtml_function_coverage=1
00:29:11.454 --rc genhtml_legend=1
00:29:11.454 --rc geninfo_all_blocks=1
00:29:11.454 --rc geninfo_unexecuted_blocks=1
00:29:11.454
00:29:11.454 '
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:29:11.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:11.454 --rc genhtml_branch_coverage=1
00:29:11.454 --rc genhtml_function_coverage=1
00:29:11.454 --rc genhtml_legend=1
00:29:11.454 --rc geninfo_all_blocks=1
00:29:11.454 --rc geninfo_unexecuted_blocks=1
00:29:11.454
00:29:11.454 '
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:29:11.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:11.454 --rc genhtml_branch_coverage=1
00:29:11.454 --rc genhtml_function_coverage=1
00:29:11.454 --rc genhtml_legend=1
00:29:11.454 --rc geninfo_all_blocks=1
00:29:11.454 --rc geninfo_unexecuted_blocks=1
00:29:11.454
00:29:11.454 '
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- #
NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.454 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.455 
09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:29:11.455 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:29:13.991 Found 0000:09:00.0 (0x8086 - 0x1592) 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x1592)' 00:29:13.991 Found 0000:09:00.1 (0x8086 - 0x1592) 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:13.991 09:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:13.991 Found net devices under 0000:09:00.0: cvl_0_0 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:13.991 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:13.992 Found net devices under 0000:09:00.1: cvl_0_1 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:13.992 09:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:29:13.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:29:13.992 00:29:13.992 --- 10.0.0.2 ping statistics --- 00:29:13.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.992 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:29:13.992 00:29:13.992 --- 10.0.0.1 ping statistics --- 00:29:13.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.992 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
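The namespace plumbing traced above (flushing the ports, moving the target interface into a netns, assigning the 10.0.0.x addresses, opening TCP port 4420, and verifying with ping in both directions) can be condensed into a dry-run sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the addresses are taken from the log; the `run` wrapper is a hypothetical helper that only prints each command, so the sketch is safe to execute without root.

```shell
#!/bin/sh
# Dry-run sketch of the NVMe-oF TCP test topology seen in the trace.
# run() only echoes the command; replace it with direct execution
# (as root) to actually apply the configuration.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0       # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1    # stays in the root namespace
NS=cvl_0_0_ns_spdk

setup_topology() {
    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$NS"
    run ip link set "$TARGET_IF" netns "$NS"
    run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NS" ip link set "$TARGET_IF" up
    run ip netns exec "$NS" ip link set lo up
    # Allow NVMe/TCP (port 4420) in from the initiator side.
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # Connectivity check in both directions, as the trace does.
    run ping -c 1 10.0.0.2
    run ip netns exec "$NS" ping -c 1 10.0.0.1
}

setup_topology
```

With this layout, the target application is launched under `ip netns exec cvl_0_0_ns_spdk` so it listens on 10.0.0.2 inside the namespace, while the initiator connects from the root namespace via 10.0.0.1.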
nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=341860 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 341860 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 341860 ']' 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:13.992 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.992 [2024-10-07 09:50:02.577175] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:13.992 [2024-10-07 09:50:02.578213] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:29:13.992 [2024-10-07 09:50:02.578273] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.992 [2024-10-07 09:50:02.643978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:13.992 [2024-10-07 09:50:02.752724] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.992 [2024-10-07 09:50:02.752798] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.992 [2024-10-07 09:50:02.752811] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.992 [2024-10-07 09:50:02.752822] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.992 [2024-10-07 09:50:02.752831] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.992 [2024-10-07 09:50:02.753543] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.993 [2024-10-07 09:50:02.753552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.993 [2024-10-07 09:50:02.854228] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:13.993 [2024-10-07 09:50:02.854243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:13.993 [2024-10-07 09:50:02.854472] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.993 [2024-10-07 09:50:02.909304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.993 [2024-10-07 09:50:02.933543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.993 NULL1 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.993 Delay0 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=341989 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:13.993 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:14.251 [2024-10-07 09:50:03.006445] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
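The target-side fixture the trace assembles via `rpc_cmd` (a TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so I/O is still queued when the subsystem is deleted) can be sketched as follows. The NQN, bdev names, and parameters are copied from the log; `rpc` is a hypothetical stand-in for SPDK's `scripts/rpc.py` wrapper and only prints here.

```shell
#!/bin/sh
# Sketch of the delete_subsystem fixture from the trace. rpc() is a
# hypothetical stand-in for scripts/rpc.py; it only echoes the call.
rpc() { echo "rpc.py $*"; }

build_fixture() {
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # Null bdev (1000 MiB, 512-byte blocks) wrapped in a delay bdev with
    # 1s latencies, so in-flight I/O is outstanding at deletion time.
    rpc bdev_null_create NULL1 1000 512
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

build_fixture
```

The trace then starts `spdk_nvme_perf` against that listener and issues `nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1` while the load is in flight; the aborted commands surface as the `(sct=0, sc=8)` error completions and `starting I/O failed: -6` lines that follow.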
00:29:16.152 09:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.152 09:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.152 09:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 starting I/O failed: -6 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 starting I/O failed: -6 00:29:16.152 Write completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 starting I/O failed: -6 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 starting I/O failed: -6 00:29:16.152 Write completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 starting I/O failed: -6 00:29:16.152 Write completed with error (sct=0, sc=8) 00:29:16.152 Write completed with error (sct=0, sc=8) 00:29:16.152 Read completed with error (sct=0, sc=8) 00:29:16.152 Write completed with error (sct=0, sc=8) 00:29:16.153 starting I/O failed: -6 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 
00:29:16.153 starting I/O failed: -6 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 starting I/O failed: -6 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 starting I/O failed: -6 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 starting I/O failed: -6 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 starting I/O failed: -6 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 starting I/O failed: -6 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 starting I/O failed: -6 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 00:29:16.153 starting I/O failed: -6 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 Write completed with error (sct=0, sc=8) 00:29:16.153 Read completed with error (sct=0, sc=8) 
00:29:16.153 [repeated I/O completion records omitted: interleaved "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)", and "starting I/O failed: -6" lines between 00:29:16.153 and 00:29:17.532]
00:29:16.153 [2024-10-07 09:50:05.127629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723570 is same with the state(6) to be set
00:29:16.154 [2024-10-07 09:50:05.128424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7ac4000c00 is same with the state(6) to be set
00:29:17.531 [2024-10-07 09:50:06.101962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724a70 is same with the state(6) to be set
00:29:17.532 [2024-10-07 09:50:06.129329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723750 is same with the state(6) to be set
00:29:17.532 [2024-10-07 09:50:06.129593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7ac400d780 is same with the state(6) to be set
00:29:17.532 [2024-10-07 09:50:06.129854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1723390 is same with the state(6) to be set
00:29:17.532 [2024-10-07 09:50:06.130122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x7f7ac400cfe0 is same with the state(6) to be set 00:29:17.532 Initializing NVMe Controllers 00:29:17.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.532 Controller IO queue size 128, less than required. 00:29:17.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:17.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:17.532 Initialization complete. Launching workers. 00:29:17.532 ======================================================== 00:29:17.532 Latency(us) 00:29:17.532 Device Information : IOPS MiB/s Average min max 00:29:17.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 180.14 0.09 914568.61 703.16 1012740.16 00:29:17.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 189.57 0.09 895380.40 792.37 1013593.62 00:29:17.532 ======================================================== 00:29:17.532 Total : 369.71 0.18 904729.82 703.16 1013593.62 00:29:17.532 00:29:17.532 [2024-10-07 09:50:06.131249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724a70 (9): Bad file descriptor 00:29:17.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:17.532 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.532 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:17.532 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 341989 00:29:17.532 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 
00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 341989 00:29:17.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (341989) - No such process 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 341989 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 341989 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 341989 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:17.792 09:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:17.792 [2024-10-07 09:50:06.653482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:17.792 09:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=342376 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 342376 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:17.792 09:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:17.792 [2024-10-07 09:50:06.708403] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:18.359 09:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:18.359 09:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 342376 00:29:18.359 09:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:18.926 09:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:18.926 09:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 342376 00:29:18.926 09:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:19.185 09:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:19.185 09:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 342376 00:29:19.185 09:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:19.751 09:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:19.751 09:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 342376 00:29:19.751 09:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:20.339 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:20.339 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 342376 00:29:20.339 09:50:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:20.906 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:20.906 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 342376 00:29:20.906 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:20.906 Initializing NVMe Controllers 00:29:20.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.906 Controller IO queue size 128, less than required. 00:29:20.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:20.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:20.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:20.906 Initialization complete. Launching workers. 
00:29:20.906 ======================================================== 00:29:20.906 Latency(us) 00:29:20.906 Device Information : IOPS MiB/s Average min max 00:29:20.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003674.41 1000205.71 1042159.02 00:29:20.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005185.08 1000223.42 1013693.76 00:29:20.906 ======================================================== 00:29:20.906 Total : 256.00 0.12 1004429.74 1000205.71 1042159.02 00:29:20.906 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 342376 00:29:21.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (342376) - No such process 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 342376 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.473 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.474 rmmod nvme_tcp 00:29:21.474 rmmod nvme_fabrics 00:29:21.474 rmmod nvme_keyring 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 341860 ']' 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 341860 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 341860 ']' 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 341860 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 341860 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 341860' 00:29:21.474 killing process with pid 341860 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 341860 00:29:21.474 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 341860 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.733 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.641 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.641 00:29:23.641 real 0m12.388s 00:29:23.641 user 0m24.662s 00:29:23.641 sys 0m3.640s 00:29:23.641 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.641 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:23.641 ************************************ 00:29:23.641 END TEST nvmf_delete_subsystem 00:29:23.641 ************************************ 00:29:23.641 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:23.641 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:23.641 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.641 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:23.901 ************************************ 00:29:23.901 START TEST nvmf_host_management 00:29:23.901 ************************************ 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:23.901 * Looking for test storage... 
00:29:23.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.901 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.901 09:50:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:23.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.902 --rc genhtml_branch_coverage=1 00:29:23.902 --rc genhtml_function_coverage=1 00:29:23.902 --rc genhtml_legend=1 00:29:23.902 --rc geninfo_all_blocks=1 00:29:23.902 --rc geninfo_unexecuted_blocks=1 00:29:23.902 00:29:23.902 ' 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:23.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.902 --rc genhtml_branch_coverage=1 00:29:23.902 --rc genhtml_function_coverage=1 00:29:23.902 --rc genhtml_legend=1 00:29:23.902 --rc geninfo_all_blocks=1 00:29:23.902 --rc geninfo_unexecuted_blocks=1 00:29:23.902 00:29:23.902 ' 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:23.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.902 --rc genhtml_branch_coverage=1 00:29:23.902 --rc genhtml_function_coverage=1 00:29:23.902 --rc genhtml_legend=1 00:29:23.902 --rc geninfo_all_blocks=1 00:29:23.902 --rc geninfo_unexecuted_blocks=1 00:29:23.902 00:29:23.902 ' 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:23.902 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.902 --rc genhtml_branch_coverage=1 00:29:23.902 --rc genhtml_function_coverage=1 00:29:23.902 --rc genhtml_legend=1 00:29:23.902 --rc geninfo_all_blocks=1 00:29:23.902 --rc geninfo_unexecuted_blocks=1 00:29:23.902 00:29:23.902 ' 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.902 09:50:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.902 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.903 
09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.903 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.820 
09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.820 09:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:29:25.820 Found 0000:09:00.0 (0x8086 - 0x1592) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.820 09:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:29:25.820 Found 0000:09:00.1 (0x8086 - 0x1592) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.820 09:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:25.820 Found net devices under 0000:09:00.0: cvl_0_0 00:29:25.820 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:25.821 Found net devices under 0000:09:00.1: cvl_0_1 00:29:25.821 09:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.821 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:29:26.081 00:29:26.081 --- 10.0.0.2 ping statistics --- 00:29:26.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.081 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:26.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:26.081 00:29:26.081 --- 10.0.0.1 ping statistics --- 00:29:26.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.081 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=344611
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 344611
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 344611 ']'
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:26.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:26.081 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.081 [2024-10-07 09:50:14.920162] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
[2024-10-07 09:50:14.921172] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
[2024-10-07 09:50:14.921221] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-10-07 09:50:14.981081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:26.341 [2024-10-07 09:50:15.088619] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-10-07 09:50:15.088688] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-10-07 09:50:15.088703] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-10-07 09:50:15.088714] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-10-07 09:50:15.088723] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:26.341 [2024-10-07 09:50:15.090329] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
[2024-10-07 09:50:15.090393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
[2024-10-07 09:50:15.090461] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
[2024-10-07 09:50:15.090464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
[2024-10-07 09:50:15.185575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
[2024-10-07 09:50:15.185827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
[2024-10-07 09:50:15.186085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
[2024-10-07 09:50:15.186607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
[2024-10-07 09:50:15.186872] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.341 [2024-10-07 09:50:15.235155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.341 Malloc0
00:29:26.341 [2024-10-07 09:50:15.295338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=344765
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 344765 /var/tmp/bdevperf.sock
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 344765 ']'
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:26.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:29:26.341 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:29:26.341 {
00:29:26.341 "params": {
00:29:26.341 "name": "Nvme$subsystem",
00:29:26.341 "trtype": "$TEST_TRANSPORT",
00:29:26.341 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:26.341 "adrfam": "ipv4",
00:29:26.341 "trsvcid": "$NVMF_PORT",
00:29:26.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:26.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:26.341 "hdgst": ${hdgst:-false},
00:29:26.341 "ddgst": ${ddgst:-false}
00:29:26.341 },
00:29:26.342 "method": "bdev_nvme_attach_controller"
00:29:26.342 }
00:29:26.342 EOF
00:29:26.342 )")
00:29:26.342 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:29:26.342 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:29:26.601 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:29:26.601 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:29:26.601 "params": {
00:29:26.601 "name": "Nvme0",
00:29:26.601 "trtype": "tcp",
00:29:26.601 "traddr": "10.0.0.2",
00:29:26.601 "adrfam": "ipv4",
00:29:26.601 "trsvcid": "4420",
00:29:26.601 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:26.601 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:26.601 "hdgst": false,
00:29:26.601 "ddgst": false
00:29:26.601 },
00:29:26.601 "method": "bdev_nvme_attach_controller"
00:29:26.601 }'
00:29:26.601 [2024-10-07 09:50:15.377228] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
[2024-10-07 09:50:15.377302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344765 ]
[2024-10-07 09:50:15.434140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-07 09:50:15.544821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:29:26.860 Running I/O for 10 seconds...
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.860 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.118 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:29:27.118 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
00:29:27.118 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.379 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.379 [2024-10-07 09:50:16.163374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.379 [2024-10-07 09:50:16.163442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.163978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.163992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 
09:50:16.164005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.380 [2024-10-07 09:50:16.164609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.380 [2024-10-07 09:50:16.164629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 
[2024-10-07 09:50:16.164658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.164964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.164984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:27.381 [2024-10-07 09:50:16.165184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.381 [2024-10-07 09:50:16.165342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.165383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:27.381 [2024-10-07 09:50:16.165461] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23eee70 was disconnected and freed. reset controller. 00:29:27.381 [2024-10-07 09:50:16.166631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:27.381 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.381 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:27.381 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.381 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.381 task offset: 83584 on job bdev=Nvme0n1 fails 00:29:27.381 00:29:27.381 Latency(us) 00:29:27.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.381 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.381 Job: Nvme0n1 ended in about 0.40 seconds with error 00:29:27.381 Verification LBA range: start 0x0 length 0x400 00:29:27.381 Nvme0n1 : 0.40 1595.83 99.74 159.58 0.00 35405.35 2512.21 34564.17 00:29:27.381 =================================================================================================================== 00:29:27.381 Total : 1595.83 99.74 159.58 0.00 35405.35 2512.21 34564.17 00:29:27.381 [2024-10-07 09:50:16.168546] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:27.381 
[2024-10-07 09:50:16.168573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d6c90 (9): Bad file descriptor 00:29:27.381 [2024-10-07 09:50:16.169770] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:29:27.381 [2024-10-07 09:50:16.169883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:27.381 [2024-10-07 09:50:16.169911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.381 [2024-10-07 09:50:16.169933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:29:27.381 [2024-10-07 09:50:16.169949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:29:27.381 [2024-10-07 09:50:16.169963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.381 [2024-10-07 09:50:16.169981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d6c90 00:29:27.381 [2024-10-07 09:50:16.170016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d6c90 (9): Bad file descriptor 00:29:27.381 [2024-10-07 09:50:16.170053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:27.381 [2024-10-07 09:50:16.170068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:27.381 [2024-10-07 09:50:16.170084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:27.381 [2024-10-07 09:50:16.170105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.381 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.381 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 344765 00:29:28.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (344765) - No such process 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:28.319 { 00:29:28.319 "params": { 
00:29:28.319 "name": "Nvme$subsystem", 00:29:28.319 "trtype": "$TEST_TRANSPORT", 00:29:28.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.319 "adrfam": "ipv4", 00:29:28.319 "trsvcid": "$NVMF_PORT", 00:29:28.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.319 "hdgst": ${hdgst:-false}, 00:29:28.319 "ddgst": ${ddgst:-false} 00:29:28.319 }, 00:29:28.319 "method": "bdev_nvme_attach_controller" 00:29:28.319 } 00:29:28.319 EOF 00:29:28.319 )") 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:29:28.319 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:28.319 "params": { 00:29:28.319 "name": "Nvme0", 00:29:28.319 "trtype": "tcp", 00:29:28.319 "traddr": "10.0.0.2", 00:29:28.319 "adrfam": "ipv4", 00:29:28.319 "trsvcid": "4420", 00:29:28.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:28.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:28.319 "hdgst": false, 00:29:28.319 "ddgst": false 00:29:28.319 }, 00:29:28.319 "method": "bdev_nvme_attach_controller" 00:29:28.319 }' 00:29:28.319 [2024-10-07 09:50:17.229111] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:29:28.319 [2024-10-07 09:50:17.229196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344921 ] 00:29:28.319 [2024-10-07 09:50:17.286148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.578 [2024-10-07 09:50:17.400116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.836 Running I/O for 1 seconds... 00:29:29.772 1664.00 IOPS, 104.00 MiB/s 00:29:29.772 Latency(us) 00:29:29.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.772 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.772 Verification LBA range: start 0x0 length 0x400 00:29:29.772 Nvme0n1 : 1.02 1700.12 106.26 0.00 0.00 37033.52 4296.25 33399.09 00:29:29.772 =================================================================================================================== 00:29:29.772 Total : 1700.12 106.26 0.00 0.00 37033.52 4296.25 33399.09 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:30.032 09:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.032 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.032 rmmod nvme_tcp 00:29:30.032 rmmod nvme_fabrics 00:29:30.032 rmmod nvme_keyring 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 344611 ']' 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 344611 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 344611 ']' 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 344611 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:29:30.032 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 344611 00:29:30.291 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:30.291 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:30.291 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 344611' 00:29:30.291 killing process with pid 344611 00:29:30.291 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 344611 00:29:30.291 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 344611 00:29:30.553 [2024-10-07 09:50:19.313529] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:29:30.553 09:50:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.553 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.462 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.462 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:32.462 00:29:32.462 real 0m8.750s 00:29:32.462 user 0m17.978s 00:29:32.462 sys 0m3.577s 00:29:32.462 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:32.462 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.462 ************************************ 00:29:32.462 END TEST nvmf_host_management 00:29:32.462 ************************************ 00:29:32.462 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:32.462 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:32.462 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:32.463 09:50:21 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:32.463 ************************************ 00:29:32.463 START TEST nvmf_lvol 00:29:32.463 ************************************ 00:29:32.463 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:32.723 * Looking for test storage... 00:29:32.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
scripts/common.sh@338 -- # local 'op=<' 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
scripts/common.sh@366 -- # ver2[v]=2 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:32.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.723 --rc genhtml_branch_coverage=1 00:29:32.723 --rc genhtml_function_coverage=1 00:29:32.723 --rc genhtml_legend=1 00:29:32.723 --rc geninfo_all_blocks=1 00:29:32.723 --rc geninfo_unexecuted_blocks=1 00:29:32.723 00:29:32.723 ' 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:32.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.723 --rc genhtml_branch_coverage=1 00:29:32.723 --rc genhtml_function_coverage=1 00:29:32.723 --rc genhtml_legend=1 00:29:32.723 --rc geninfo_all_blocks=1 00:29:32.723 --rc geninfo_unexecuted_blocks=1 00:29:32.723 00:29:32.723 ' 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:32.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.723 --rc genhtml_branch_coverage=1 00:29:32.723 --rc genhtml_function_coverage=1 00:29:32.723 --rc genhtml_legend=1 00:29:32.723 --rc geninfo_all_blocks=1 00:29:32.723 --rc geninfo_unexecuted_blocks=1 00:29:32.723 00:29:32.723 ' 00:29:32.723 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:32.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.723 --rc genhtml_branch_coverage=1 00:29:32.723 --rc genhtml_function_coverage=1 00:29:32.723 --rc genhtml_legend=1 00:29:32.723 --rc geninfo_all_blocks=1 00:29:32.723 --rc geninfo_unexecuted_blocks=1 00:29:32.723 00:29:32.724 ' 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.724 09:50:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.724 09:50:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.724 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.631 
09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.631 09:50:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:29:34.631 Found 0000:09:00.0 (0x8086 - 0x1592) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:29:34.631 Found 0000:09:00.1 (0x8086 - 0x1592) 00:29:34.631 09:50:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.631 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:29:34.632 Found net devices under 0000:09:00.0: cvl_0_0 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:34.632 Found net devices under 0000:09:00.1: cvl_0_1 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # 
nvmf_tcp_init 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.632 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.891 
09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.891 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.891 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.891 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.891 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.891 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.891 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.891 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:29:34.891 00:29:34.891 --- 10.0.0.2 ping statistics --- 00:29:34.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.891 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:29:34.891 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:29:34.891 00:29:34.891 --- 10.0.0.1 ping statistics --- 00:29:34.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.891 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=347010 
00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 347010 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 347010 ']' 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:34.892 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:34.892 [2024-10-07 09:50:23.819804] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:34.892 [2024-10-07 09:50:23.820852] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:29:34.892 [2024-10-07 09:50:23.820902] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.892 [2024-10-07 09:50:23.881104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:35.151 [2024-10-07 09:50:23.991925] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.151 [2024-10-07 09:50:23.991990] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.151 [2024-10-07 09:50:23.992003] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.151 [2024-10-07 09:50:23.992014] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.151 [2024-10-07 09:50:23.992024] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.151 [2024-10-07 09:50:23.992839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.151 [2024-10-07 09:50:23.992938] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.151 [2024-10-07 09:50:23.992942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.151 [2024-10-07 09:50:24.080064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:35.151 [2024-10-07 09:50:24.080264] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:35.151 [2024-10-07 09:50:24.080271] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:35.151 [2024-10-07 09:50:24.080528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:35.151 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:35.151 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:29:35.151 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:35.151 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:35.151 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:35.151 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.151 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:35.410 [2024-10-07 09:50:24.393649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.668 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:35.927 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:35.927 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:36.186 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:36.186 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:36.445 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:36.705 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=61033ed0-8346-4a2e-b968-588ca61e956a 00:29:36.705 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61033ed0-8346-4a2e-b968-588ca61e956a lvol 20 00:29:36.964 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d2397765-e435-4472-b550-1e0494c263a2 00:29:36.964 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:37.223 09:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2397765-e435-4472-b550-1e0494c263a2 00:29:37.481 09:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.740 [2024-10-07 09:50:26.637866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.740 09:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.998 
09:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=347417 00:29:37.998 09:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:37.999 09:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:38.935 09:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d2397765-e435-4472-b550-1e0494c263a2 MY_SNAPSHOT 00:29:39.503 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5eb97995-0d1a-4dda-9f49-dda8b4a10a10 00:29:39.503 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d2397765-e435-4472-b550-1e0494c263a2 30 00:29:39.762 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5eb97995-0d1a-4dda-9f49-dda8b4a10a10 MY_CLONE 00:29:40.020 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e7b2ddd8-e023-4803-9908-85eca88e95ab 00:29:40.020 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e7b2ddd8-e023-4803-9908-85eca88e95ab 00:29:40.587 09:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 347417 00:29:48.705 Initializing NVMe Controllers 00:29:48.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:48.705 
Controller IO queue size 128, less than required. 00:29:48.705 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:48.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:48.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:48.705 Initialization complete. Launching workers. 00:29:48.705 ======================================================== 00:29:48.705 Latency(us) 00:29:48.705 Device Information : IOPS MiB/s Average min max 00:29:48.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10577.30 41.32 12105.74 2866.50 80735.63 00:29:48.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10441.30 40.79 12262.22 4376.67 77336.12 00:29:48.705 ======================================================== 00:29:48.705 Total : 21018.60 82.10 12183.47 2866.50 80735.63 00:29:48.705 00:29:48.705 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:48.963 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d2397765-e435-4472-b550-1e0494c263a2 00:29:49.222 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61033ed0-8346-4a2e-b968-588ca61e956a 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.480 rmmod nvme_tcp 00:29:49.480 rmmod nvme_fabrics 00:29:49.480 rmmod nvme_keyring 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 347010 ']' 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 347010 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 347010 ']' 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 347010 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 347010 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 347010' 00:29:49.480 killing process with pid 347010 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 347010 00:29:49.480 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 347010 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.739 09:50:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.739 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.281 00:29:52.281 real 0m19.320s 00:29:52.281 user 0m56.631s 00:29:52.281 sys 0m7.880s 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:52.281 ************************************ 00:29:52.281 END TEST nvmf_lvol 00:29:52.281 ************************************ 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:52.281 ************************************ 00:29:52.281 START TEST nvmf_lvs_grow 00:29:52.281 ************************************ 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:52.281 * Looking for test storage... 
00:29:52.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.281 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.282 09:50:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.282 09:50:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.282 --rc genhtml_branch_coverage=1 00:29:52.282 --rc genhtml_function_coverage=1 00:29:52.282 --rc genhtml_legend=1 00:29:52.282 --rc geninfo_all_blocks=1 00:29:52.282 --rc geninfo_unexecuted_blocks=1 00:29:52.282 00:29:52.282 ' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.282 --rc genhtml_branch_coverage=1 00:29:52.282 --rc genhtml_function_coverage=1 00:29:52.282 --rc genhtml_legend=1 00:29:52.282 --rc geninfo_all_blocks=1 00:29:52.282 --rc geninfo_unexecuted_blocks=1 00:29:52.282 00:29:52.282 ' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.282 --rc genhtml_branch_coverage=1 00:29:52.282 --rc genhtml_function_coverage=1 00:29:52.282 --rc genhtml_legend=1 00:29:52.282 --rc geninfo_all_blocks=1 00:29:52.282 --rc geninfo_unexecuted_blocks=1 00:29:52.282 00:29:52.282 ' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.282 --rc genhtml_branch_coverage=1 00:29:52.282 --rc genhtml_function_coverage=1 00:29:52.282 --rc genhtml_legend=1 00:29:52.282 --rc geninfo_all_blocks=1 00:29:52.282 --rc 
geninfo_unexecuted_blocks=1 00:29:52.282 00:29:52.282 ' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:29:52.282 09:50:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.282 09:50:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.282 09:50:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.282 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.283 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:52.283 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:52.283 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.283 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:54.186 
09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.186 09:50:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.186 09:50:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:29:54.186 Found 0000:09:00.0 (0x8086 - 0x1592) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:29:54.186 Found 0000:09:00.1 (0x8086 - 0x1592) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x1592 == \0\x\1\0\1\7 ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.186 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:54.186 Found net devices under 0000:09:00.0: cvl_0_0 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.187 09:50:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:54.187 Found net devices under 0000:09:00.1: cvl_0_1 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.187 
09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.187 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:29:54.187 00:29:54.187 --- 10.0.0.2 ping statistics --- 00:29:54.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.187 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:29:54.187 00:29:54.187 --- 10.0.0.1 ping statistics --- 00:29:54.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.187 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:54.187 09:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=350523 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 350523 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 350523 ']' 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:54.187 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:54.187 [2024-10-07 09:50:43.094597] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:54.187 [2024-10-07 09:50:43.095727] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:29:54.187 [2024-10-07 09:50:43.095784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.187 [2024-10-07 09:50:43.156863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.447 [2024-10-07 09:50:43.265066] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.447 [2024-10-07 09:50:43.265126] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.447 [2024-10-07 09:50:43.265149] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.447 [2024-10-07 09:50:43.265160] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.447 [2024-10-07 09:50:43.265169] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.447 [2024-10-07 09:50:43.265743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.447 [2024-10-07 09:50:43.351354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:54.447 [2024-10-07 09:50:43.351622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:54.447 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.447 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:29:54.447 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:54.447 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:54.447 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:54.447 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.447 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:54.707 [2024-10-07 09:50:43.642352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:54.707 ************************************ 00:29:54.707 START TEST lvs_grow_clean 00:29:54.707 ************************************ 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:29:54.707 09:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:54.707 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:55.275 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:55.275 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:55.275 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=12bdb2d0-17ba-4820-8907-7b02a9866f58 00:29:55.275 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:29:55.275 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:55.534 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:55.534 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:55.534 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 lvol 150 00:29:56.102 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8209081a-e4f9-4e04-9fc7-9e39c0abd02b 00:29:56.102 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:56.102 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:56.102 [2024-10-07 09:50:45.066224] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:56.102 [2024-10-07 09:50:45.066324] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:56.102 true 00:29:56.102 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:29:56.102 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:56.363 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:56.363 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:56.931 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8209081a-e4f9-4e04-9fc7-9e39c0abd02b 00:29:56.931 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:57.190 [2024-10-07 09:50:46.166550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.190 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=350937 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 350937 /var/tmp/bdevperf.sock 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 350937 ']' 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:57.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:57.757 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:57.757 [2024-10-07 09:50:46.499185] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:29:57.757 [2024-10-07 09:50:46.499276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350937 ] 00:29:57.757 [2024-10-07 09:50:46.558103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.757 [2024-10-07 09:50:46.669693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.015 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:58.015 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:29:58.015 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:58.273 Nvme0n1 00:29:58.273 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:58.531 [ 00:29:58.531 { 00:29:58.531 "name": "Nvme0n1", 00:29:58.531 "aliases": [ 00:29:58.531 "8209081a-e4f9-4e04-9fc7-9e39c0abd02b" 00:29:58.531 ], 00:29:58.531 "product_name": "NVMe disk", 00:29:58.531 
"block_size": 4096, 00:29:58.531 "num_blocks": 38912, 00:29:58.531 "uuid": "8209081a-e4f9-4e04-9fc7-9e39c0abd02b", 00:29:58.531 "numa_id": 0, 00:29:58.531 "assigned_rate_limits": { 00:29:58.531 "rw_ios_per_sec": 0, 00:29:58.531 "rw_mbytes_per_sec": 0, 00:29:58.531 "r_mbytes_per_sec": 0, 00:29:58.531 "w_mbytes_per_sec": 0 00:29:58.531 }, 00:29:58.531 "claimed": false, 00:29:58.531 "zoned": false, 00:29:58.531 "supported_io_types": { 00:29:58.531 "read": true, 00:29:58.531 "write": true, 00:29:58.531 "unmap": true, 00:29:58.531 "flush": true, 00:29:58.531 "reset": true, 00:29:58.531 "nvme_admin": true, 00:29:58.531 "nvme_io": true, 00:29:58.531 "nvme_io_md": false, 00:29:58.531 "write_zeroes": true, 00:29:58.531 "zcopy": false, 00:29:58.531 "get_zone_info": false, 00:29:58.531 "zone_management": false, 00:29:58.531 "zone_append": false, 00:29:58.531 "compare": true, 00:29:58.531 "compare_and_write": true, 00:29:58.531 "abort": true, 00:29:58.531 "seek_hole": false, 00:29:58.531 "seek_data": false, 00:29:58.531 "copy": true, 00:29:58.531 "nvme_iov_md": false 00:29:58.531 }, 00:29:58.531 "memory_domains": [ 00:29:58.531 { 00:29:58.531 "dma_device_id": "system", 00:29:58.532 "dma_device_type": 1 00:29:58.532 } 00:29:58.532 ], 00:29:58.532 "driver_specific": { 00:29:58.532 "nvme": [ 00:29:58.532 { 00:29:58.532 "trid": { 00:29:58.532 "trtype": "TCP", 00:29:58.532 "adrfam": "IPv4", 00:29:58.532 "traddr": "10.0.0.2", 00:29:58.532 "trsvcid": "4420", 00:29:58.532 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:58.532 }, 00:29:58.532 "ctrlr_data": { 00:29:58.532 "cntlid": 1, 00:29:58.532 "vendor_id": "0x8086", 00:29:58.532 "model_number": "SPDK bdev Controller", 00:29:58.532 "serial_number": "SPDK0", 00:29:58.532 "firmware_revision": "25.01", 00:29:58.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:58.532 "oacs": { 00:29:58.532 "security": 0, 00:29:58.532 "format": 0, 00:29:58.532 "firmware": 0, 00:29:58.532 "ns_manage": 0 00:29:58.532 }, 00:29:58.532 "multi_ctrlr": true, 
00:29:58.532 "ana_reporting": false 00:29:58.532 }, 00:29:58.532 "vs": { 00:29:58.532 "nvme_version": "1.3" 00:29:58.532 }, 00:29:58.532 "ns_data": { 00:29:58.532 "id": 1, 00:29:58.532 "can_share": true 00:29:58.532 } 00:29:58.532 } 00:29:58.532 ], 00:29:58.532 "mp_policy": "active_passive" 00:29:58.532 } 00:29:58.532 } 00:29:58.532 ] 00:29:58.532 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=351066 00:29:58.532 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:58.532 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:58.532 Running I/O for 10 seconds... 00:29:59.906 Latency(us) 00:29:59.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.906 Nvme0n1 : 1.00 14833.00 57.94 0.00 0.00 0.00 0.00 0.00 00:29:59.906 =================================================================================================================== 00:29:59.906 Total : 14833.00 57.94 0.00 0.00 0.00 0.00 0.00 00:29:59.906 00:30:00.473 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:00.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.732 Nvme0n1 : 2.00 15019.00 58.67 0.00 0.00 0.00 0.00 0.00 00:30:00.732 =================================================================================================================== 00:30:00.732 Total : 15019.00 58.67 0.00 0.00 0.00 0.00 0.00 00:30:00.732 
00:30:00.732 true 00:30:00.732 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:00.732 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:00.991 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:00.991 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:00.991 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 351066 00:30:01.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.557 Nvme0n1 : 3.00 15019.00 58.67 0.00 0.00 0.00 0.00 0.00 00:30:01.557 =================================================================================================================== 00:30:01.557 Total : 15019.00 58.67 0.00 0.00 0.00 0.00 0.00 00:30:01.557 00:30:02.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.933 Nvme0n1 : 4.00 15090.25 58.95 0.00 0.00 0.00 0.00 0.00 00:30:02.933 =================================================================================================================== 00:30:02.934 Total : 15090.25 58.95 0.00 0.00 0.00 0.00 0.00 00:30:02.934 00:30:03.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:03.869 Nvme0n1 : 5.00 15177.00 59.29 0.00 0.00 0.00 0.00 0.00 00:30:03.869 =================================================================================================================== 00:30:03.869 Total : 15177.00 59.29 0.00 0.00 0.00 0.00 0.00 00:30:03.869 00:30:04.806 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:30:04.806 Nvme0n1 : 6.00 15204.33 59.39 0.00 0.00 0.00 0.00 0.00 00:30:04.806 =================================================================================================================== 00:30:04.806 Total : 15204.33 59.39 0.00 0.00 0.00 0.00 0.00 00:30:04.806 00:30:05.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.743 Nvme0n1 : 7.00 15250.86 59.57 0.00 0.00 0.00 0.00 0.00 00:30:05.743 =================================================================================================================== 00:30:05.743 Total : 15250.86 59.57 0.00 0.00 0.00 0.00 0.00 00:30:05.743 00:30:06.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:06.679 Nvme0n1 : 8.00 15280.38 59.69 0.00 0.00 0.00 0.00 0.00 00:30:06.679 =================================================================================================================== 00:30:06.680 Total : 15280.38 59.69 0.00 0.00 0.00 0.00 0.00 00:30:06.680 00:30:07.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:07.615 Nvme0n1 : 9.00 15314.67 59.82 0.00 0.00 0.00 0.00 0.00 00:30:07.615 =================================================================================================================== 00:30:07.615 Total : 15314.67 59.82 0.00 0.00 0.00 0.00 0.00 00:30:07.615 00:30:08.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:08.549 Nvme0n1 : 10.00 15340.00 59.92 0.00 0.00 0.00 0.00 0.00 00:30:08.549 =================================================================================================================== 00:30:08.549 Total : 15340.00 59.92 0.00 0.00 0.00 0.00 0.00 00:30:08.549 00:30:08.549 00:30:08.549 Latency(us) 00:30:08.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:08.549 Nvme0n1 : 
10.01 15338.40 59.92 0.00 0.00 8339.25 4344.79 18252.99 00:30:08.549 =================================================================================================================== 00:30:08.549 Total : 15338.40 59.92 0.00 0.00 8339.25 4344.79 18252.99 00:30:08.549 { 00:30:08.549 "results": [ 00:30:08.549 { 00:30:08.549 "job": "Nvme0n1", 00:30:08.549 "core_mask": "0x2", 00:30:08.549 "workload": "randwrite", 00:30:08.549 "status": "finished", 00:30:08.549 "queue_depth": 128, 00:30:08.549 "io_size": 4096, 00:30:08.549 "runtime": 10.005283, 00:30:08.549 "iops": 15338.396725010178, 00:30:08.549 "mibps": 59.91561220707101, 00:30:08.549 "io_failed": 0, 00:30:08.549 "io_timeout": 0, 00:30:08.549 "avg_latency_us": 8339.254123804318, 00:30:08.549 "min_latency_us": 4344.794074074074, 00:30:08.549 "max_latency_us": 18252.98962962963 00:30:08.549 } 00:30:08.549 ], 00:30:08.549 "core_count": 1 00:30:08.549 } 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 350937 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 350937 ']' 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 350937 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 350937 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:08.808 09:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 350937' 00:30:08.808 killing process with pid 350937 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 350937 00:30:08.808 Received shutdown signal, test time was about 10.000000 seconds 00:30:08.808 00:30:08.808 Latency(us) 00:30:08.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.808 =================================================================================================================== 00:30:08.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:08.808 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 350937 00:30:09.068 09:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:09.326 09:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:09.584 09:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:09.584 09:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:09.843 09:50:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:09.843 09:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:09.843 09:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:10.101 [2024-10-07 09:50:58.998315] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- 
# case "$(type -t "$arg")" in 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:10.102 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:10.361 request: 00:30:10.361 { 00:30:10.361 "uuid": "12bdb2d0-17ba-4820-8907-7b02a9866f58", 00:30:10.361 "method": "bdev_lvol_get_lvstores", 00:30:10.361 "req_id": 1 00:30:10.361 } 00:30:10.361 Got JSON-RPC error response 00:30:10.361 response: 00:30:10.361 { 00:30:10.361 "code": -19, 00:30:10.361 "message": "No such device" 00:30:10.361 } 00:30:10.361 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:30:10.361 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:10.361 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:10.361 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:10.361 09:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:10.621 aio_bdev 00:30:10.621 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8209081a-e4f9-4e04-9fc7-9e39c0abd02b 00:30:10.621 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=8209081a-e4f9-4e04-9fc7-9e39c0abd02b 00:30:10.621 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:10.621 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:30:10.621 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:10.621 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:10.621 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:10.880 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8209081a-e4f9-4e04-9fc7-9e39c0abd02b -t 2000 00:30:11.447 [ 00:30:11.447 { 00:30:11.447 "name": "8209081a-e4f9-4e04-9fc7-9e39c0abd02b", 00:30:11.447 "aliases": [ 00:30:11.447 "lvs/lvol" 00:30:11.447 ], 00:30:11.447 "product_name": "Logical Volume", 00:30:11.447 "block_size": 4096, 00:30:11.447 "num_blocks": 38912, 00:30:11.447 
"uuid": "8209081a-e4f9-4e04-9fc7-9e39c0abd02b", 00:30:11.447 "assigned_rate_limits": { 00:30:11.447 "rw_ios_per_sec": 0, 00:30:11.447 "rw_mbytes_per_sec": 0, 00:30:11.447 "r_mbytes_per_sec": 0, 00:30:11.447 "w_mbytes_per_sec": 0 00:30:11.447 }, 00:30:11.447 "claimed": false, 00:30:11.447 "zoned": false, 00:30:11.447 "supported_io_types": { 00:30:11.447 "read": true, 00:30:11.447 "write": true, 00:30:11.447 "unmap": true, 00:30:11.447 "flush": false, 00:30:11.447 "reset": true, 00:30:11.447 "nvme_admin": false, 00:30:11.447 "nvme_io": false, 00:30:11.447 "nvme_io_md": false, 00:30:11.447 "write_zeroes": true, 00:30:11.447 "zcopy": false, 00:30:11.447 "get_zone_info": false, 00:30:11.447 "zone_management": false, 00:30:11.447 "zone_append": false, 00:30:11.447 "compare": false, 00:30:11.447 "compare_and_write": false, 00:30:11.447 "abort": false, 00:30:11.447 "seek_hole": true, 00:30:11.447 "seek_data": true, 00:30:11.447 "copy": false, 00:30:11.447 "nvme_iov_md": false 00:30:11.447 }, 00:30:11.447 "driver_specific": { 00:30:11.447 "lvol": { 00:30:11.447 "lvol_store_uuid": "12bdb2d0-17ba-4820-8907-7b02a9866f58", 00:30:11.447 "base_bdev": "aio_bdev", 00:30:11.447 "thin_provision": false, 00:30:11.447 "num_allocated_clusters": 38, 00:30:11.447 "snapshot": false, 00:30:11.447 "clone": false, 00:30:11.447 "esnap_clone": false 00:30:11.447 } 00:30:11.447 } 00:30:11.447 } 00:30:11.447 ] 00:30:11.447 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:30:11.448 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:11.448 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:11.707 09:51:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:11.707 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:11.707 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:11.966 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:11.966 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8209081a-e4f9-4e04-9fc7-9e39c0abd02b 00:30:12.225 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12bdb2d0-17ba-4820-8907-7b02a9866f58 00:30:12.483 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:12.743 00:30:12.743 real 0m17.932s 00:30:12.743 user 0m17.325s 00:30:12.743 sys 0m1.925s 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # 
set +x 00:30:12.743 ************************************ 00:30:12.743 END TEST lvs_grow_clean 00:30:12.743 ************************************ 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:12.743 ************************************ 00:30:12.743 START TEST lvs_grow_dirty 00:30:12.743 ************************************ 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:12.743 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:13.002 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:13.002 09:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:13.261 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:13.261 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:13.261 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:13.829 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:13.829 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:13.829 09:51:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e6d78dc4-9596-401e-ac66-d00587a84b87 lvol 150 00:30:13.829 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c12a6eaa-3c5f-4d9c-a383-713b720fa542 00:30:13.829 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:13.829 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:14.088 [2024-10-07 09:51:03.062233] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:14.088 [2024-10-07 09:51:03.062336] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:14.088 true 00:30:14.088 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:14.088 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:14.678 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:14.678 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:14.678 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c12a6eaa-3c5f-4d9c-a383-713b720fa542 00:30:14.991 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:15.266 [2024-10-07 09:51:04.174508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.266 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.553 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=353121 00:30:15.554 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:15.554 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.554 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 353121 /var/tmp/bdevperf.sock 00:30:15.554 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 353121 ']' 00:30:15.554 09:51:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.554 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:15.554 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:15.554 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:15.554 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:15.554 [2024-10-07 09:51:04.510785] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:30:15.554 [2024-10-07 09:51:04.510875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353121 ] 00:30:15.838 [2024-10-07 09:51:04.572042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.838 [2024-10-07 09:51:04.687060] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.838 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:15.838 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:30:15.838 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:16.477 Nvme0n1 00:30:16.477 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:16.477 [ 00:30:16.477 { 00:30:16.477 "name": "Nvme0n1", 00:30:16.477 "aliases": [ 00:30:16.477 "c12a6eaa-3c5f-4d9c-a383-713b720fa542" 00:30:16.477 ], 00:30:16.477 "product_name": "NVMe disk", 00:30:16.477 "block_size": 4096, 00:30:16.477 "num_blocks": 38912, 00:30:16.477 "uuid": "c12a6eaa-3c5f-4d9c-a383-713b720fa542", 00:30:16.477 "numa_id": 0, 00:30:16.477 "assigned_rate_limits": { 00:30:16.477 "rw_ios_per_sec": 0, 00:30:16.477 "rw_mbytes_per_sec": 0, 00:30:16.477 "r_mbytes_per_sec": 0, 00:30:16.477 "w_mbytes_per_sec": 0 00:30:16.477 }, 00:30:16.477 "claimed": false, 00:30:16.477 "zoned": false, 
00:30:16.477 "supported_io_types": { 00:30:16.477 "read": true, 00:30:16.477 "write": true, 00:30:16.477 "unmap": true, 00:30:16.477 "flush": true, 00:30:16.477 "reset": true, 00:30:16.477 "nvme_admin": true, 00:30:16.477 "nvme_io": true, 00:30:16.477 "nvme_io_md": false, 00:30:16.477 "write_zeroes": true, 00:30:16.477 "zcopy": false, 00:30:16.477 "get_zone_info": false, 00:30:16.477 "zone_management": false, 00:30:16.477 "zone_append": false, 00:30:16.477 "compare": true, 00:30:16.477 "compare_and_write": true, 00:30:16.477 "abort": true, 00:30:16.477 "seek_hole": false, 00:30:16.477 "seek_data": false, 00:30:16.477 "copy": true, 00:30:16.477 "nvme_iov_md": false 00:30:16.477 }, 00:30:16.477 "memory_domains": [ 00:30:16.477 { 00:30:16.477 "dma_device_id": "system", 00:30:16.477 "dma_device_type": 1 00:30:16.477 } 00:30:16.477 ], 00:30:16.477 "driver_specific": { 00:30:16.477 "nvme": [ 00:30:16.477 { 00:30:16.477 "trid": { 00:30:16.477 "trtype": "TCP", 00:30:16.477 "adrfam": "IPv4", 00:30:16.477 "traddr": "10.0.0.2", 00:30:16.477 "trsvcid": "4420", 00:30:16.477 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:16.477 }, 00:30:16.477 "ctrlr_data": { 00:30:16.477 "cntlid": 1, 00:30:16.477 "vendor_id": "0x8086", 00:30:16.477 "model_number": "SPDK bdev Controller", 00:30:16.477 "serial_number": "SPDK0", 00:30:16.477 "firmware_revision": "25.01", 00:30:16.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.477 "oacs": { 00:30:16.477 "security": 0, 00:30:16.477 "format": 0, 00:30:16.477 "firmware": 0, 00:30:16.477 "ns_manage": 0 00:30:16.477 }, 00:30:16.477 "multi_ctrlr": true, 00:30:16.477 "ana_reporting": false 00:30:16.477 }, 00:30:16.477 "vs": { 00:30:16.477 "nvme_version": "1.3" 00:30:16.477 }, 00:30:16.477 "ns_data": { 00:30:16.477 "id": 1, 00:30:16.477 "can_share": true 00:30:16.477 } 00:30:16.477 } 00:30:16.477 ], 00:30:16.477 "mp_policy": "active_passive" 00:30:16.477 } 00:30:16.477 } 00:30:16.477 ] 00:30:16.477 09:51:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=353256 00:30:16.477 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:16.477 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:16.767 Running I/O for 10 seconds... 00:30:17.721 Latency(us) 00:30:17.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.721 Nvme0n1 : 1.00 14824.00 57.91 0.00 0.00 0.00 0.00 0.00 00:30:17.721 =================================================================================================================== 00:30:17.721 Total : 14824.00 57.91 0.00 0.00 0.00 0.00 0.00 00:30:17.721 00:30:18.661 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:18.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.661 Nvme0n1 : 2.00 14924.50 58.30 0.00 0.00 0.00 0.00 0.00 00:30:18.661 =================================================================================================================== 00:30:18.661 Total : 14924.50 58.30 0.00 0.00 0.00 0.00 0.00 00:30:18.661 00:30:18.920 true 00:30:18.920 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:18.920 09:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:19.179 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:19.179 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:19.179 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 353256 00:30:19.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.747 Nvme0n1 : 3.00 14998.33 58.59 0.00 0.00 0.00 0.00 0.00 00:30:19.747 =================================================================================================================== 00:30:19.747 Total : 14998.33 58.59 0.00 0.00 0.00 0.00 0.00 00:30:19.747 00:30:20.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.682 Nvme0n1 : 4.00 15112.75 59.03 0.00 0.00 0.00 0.00 0.00 00:30:20.682 =================================================================================================================== 00:30:20.682 Total : 15112.75 59.03 0.00 0.00 0.00 0.00 0.00 00:30:20.682 00:30:21.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.617 Nvme0n1 : 5.00 15158.20 59.21 0.00 0.00 0.00 0.00 0.00 00:30:21.617 =================================================================================================================== 00:30:21.617 Total : 15158.20 59.21 0.00 0.00 0.00 0.00 0.00 00:30:21.617 00:30:23.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.003 Nvme0n1 : 6.00 15218.83 59.45 0.00 0.00 0.00 0.00 0.00 00:30:23.003 =================================================================================================================== 00:30:23.003 Total : 15218.83 59.45 0.00 0.00 0.00 0.00 
0.00 00:30:23.003 00:30:23.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.939 Nvme0n1 : 7.00 15301.86 59.77 0.00 0.00 0.00 0.00 0.00 00:30:23.940 =================================================================================================================== 00:30:23.940 Total : 15301.86 59.77 0.00 0.00 0.00 0.00 0.00 00:30:23.940 00:30:24.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:24.878 Nvme0n1 : 8.00 15348.25 59.95 0.00 0.00 0.00 0.00 0.00 00:30:24.878 =================================================================================================================== 00:30:24.878 Total : 15348.25 59.95 0.00 0.00 0.00 0.00 0.00 00:30:24.878 00:30:25.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:25.814 Nvme0n1 : 9.00 15395.33 60.14 0.00 0.00 0.00 0.00 0.00 00:30:25.814 =================================================================================================================== 00:30:25.814 Total : 15395.33 60.14 0.00 0.00 0.00 0.00 0.00 00:30:25.814 00:30:26.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.749 Nvme0n1 : 10.00 15439.50 60.31 0.00 0.00 0.00 0.00 0.00 00:30:26.749 =================================================================================================================== 00:30:26.749 Total : 15439.50 60.31 0.00 0.00 0.00 0.00 0.00 00:30:26.749 00:30:26.749 00:30:26.749 Latency(us) 00:30:26.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.749 Nvme0n1 : 10.01 15440.83 60.32 0.00 0.00 8285.08 4975.88 18544.26 00:30:26.749 =================================================================================================================== 00:30:26.749 Total : 15440.83 60.32 0.00 0.00 8285.08 4975.88 18544.26 00:30:26.749 { 00:30:26.749 "results": [ 
00:30:26.749 { 00:30:26.749 "job": "Nvme0n1", 00:30:26.749 "core_mask": "0x2", 00:30:26.749 "workload": "randwrite", 00:30:26.749 "status": "finished", 00:30:26.749 "queue_depth": 128, 00:30:26.749 "io_size": 4096, 00:30:26.749 "runtime": 10.007426, 00:30:26.749 "iops": 15440.833636941208, 00:30:26.749 "mibps": 60.315756394301594, 00:30:26.749 "io_failed": 0, 00:30:26.749 "io_timeout": 0, 00:30:26.749 "avg_latency_us": 8285.079626942748, 00:30:26.750 "min_latency_us": 4975.881481481481, 00:30:26.750 "max_latency_us": 18544.26074074074 00:30:26.750 } 00:30:26.750 ], 00:30:26.750 "core_count": 1 00:30:26.750 } 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 353121 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 353121 ']' 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 353121 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 353121 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 353121' 00:30:26.750 
killing process with pid 353121 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 353121 00:30:26.750 Received shutdown signal, test time was about 10.000000 seconds 00:30:26.750 00:30:26.750 Latency(us) 00:30:26.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.750 =================================================================================================================== 00:30:26.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.750 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 353121 00:30:27.009 09:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.269 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:27.528 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:27.528 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty 
-- target/nvmf_lvs_grow.sh@74 -- # kill -9 350523 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 350523 00:30:27.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 350523 Killed "${NVMF_APP[@]}" "$@" 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=355018 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 355018 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 355018 ']' 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:27.787 09:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:28.045 [2024-10-07 09:51:16.810038] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:28.046 [2024-10-07 09:51:16.811162] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:30:28.046 [2024-10-07 09:51:16.811240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.046 [2024-10-07 09:51:16.875069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.046 [2024-10-07 09:51:16.980168] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.046 [2024-10-07 09:51:16.980244] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.046 [2024-10-07 09:51:16.980257] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.046 [2024-10-07 09:51:16.980268] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.046 [2024-10-07 09:51:16.980277] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:28.046 [2024-10-07 09:51:16.980781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.304 [2024-10-07 09:51:17.063835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:28.304 [2024-10-07 09:51:17.064132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:28.304 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:28.304 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:30:28.305 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:28.305 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:28.305 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:28.305 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.305 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:28.563 [2024-10-07 09:51:17.367404] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:28.563 [2024-10-07 09:51:17.367538] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:28.563 [2024-10-07 09:51:17.367584] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:28.563 09:51:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:28.563 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c12a6eaa-3c5f-4d9c-a383-713b720fa542 00:30:28.563 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c12a6eaa-3c5f-4d9c-a383-713b720fa542 00:30:28.563 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:28.563 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:30:28.563 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:28.563 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:28.563 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:28.822 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c12a6eaa-3c5f-4d9c-a383-713b720fa542 -t 2000 00:30:29.081 [ 00:30:29.081 { 00:30:29.081 "name": "c12a6eaa-3c5f-4d9c-a383-713b720fa542", 00:30:29.081 "aliases": [ 00:30:29.081 "lvs/lvol" 00:30:29.081 ], 00:30:29.081 "product_name": "Logical Volume", 00:30:29.081 "block_size": 4096, 00:30:29.081 "num_blocks": 38912, 00:30:29.081 "uuid": "c12a6eaa-3c5f-4d9c-a383-713b720fa542", 00:30:29.081 "assigned_rate_limits": { 00:30:29.081 "rw_ios_per_sec": 0, 00:30:29.081 "rw_mbytes_per_sec": 0, 00:30:29.081 
"r_mbytes_per_sec": 0, 00:30:29.081 "w_mbytes_per_sec": 0 00:30:29.081 }, 00:30:29.081 "claimed": false, 00:30:29.081 "zoned": false, 00:30:29.081 "supported_io_types": { 00:30:29.081 "read": true, 00:30:29.081 "write": true, 00:30:29.081 "unmap": true, 00:30:29.081 "flush": false, 00:30:29.081 "reset": true, 00:30:29.081 "nvme_admin": false, 00:30:29.081 "nvme_io": false, 00:30:29.081 "nvme_io_md": false, 00:30:29.081 "write_zeroes": true, 00:30:29.081 "zcopy": false, 00:30:29.081 "get_zone_info": false, 00:30:29.081 "zone_management": false, 00:30:29.081 "zone_append": false, 00:30:29.081 "compare": false, 00:30:29.081 "compare_and_write": false, 00:30:29.081 "abort": false, 00:30:29.081 "seek_hole": true, 00:30:29.081 "seek_data": true, 00:30:29.081 "copy": false, 00:30:29.081 "nvme_iov_md": false 00:30:29.081 }, 00:30:29.081 "driver_specific": { 00:30:29.081 "lvol": { 00:30:29.081 "lvol_store_uuid": "e6d78dc4-9596-401e-ac66-d00587a84b87", 00:30:29.081 "base_bdev": "aio_bdev", 00:30:29.081 "thin_provision": false, 00:30:29.081 "num_allocated_clusters": 38, 00:30:29.081 "snapshot": false, 00:30:29.081 "clone": false, 00:30:29.081 "esnap_clone": false 00:30:29.081 } 00:30:29.081 } 00:30:29.081 } 00:30:29.081 ] 00:30:29.081 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:30:29.081 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:29.081 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:29.341 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:29.341 09:51:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:29.341 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:29.599 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:29.599 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:29.857 [2024-10-07 09:51:18.741279] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:29.857 09:51:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:29.857 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:30.116 request: 00:30:30.116 { 00:30:30.116 "uuid": "e6d78dc4-9596-401e-ac66-d00587a84b87", 00:30:30.116 "method": "bdev_lvol_get_lvstores", 00:30:30.116 "req_id": 1 00:30:30.116 } 00:30:30.116 Got JSON-RPC error response 00:30:30.116 response: 00:30:30.116 { 00:30:30.116 "code": -19, 00:30:30.116 "message": "No such device" 00:30:30.116 } 00:30:30.116 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:30:30.116 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:30:30.116 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:30.116 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:30.116 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:30.375 aio_bdev 00:30:30.375 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c12a6eaa-3c5f-4d9c-a383-713b720fa542 00:30:30.375 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c12a6eaa-3c5f-4d9c-a383-713b720fa542 00:30:30.375 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:30.375 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:30:30.375 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:30.375 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:30.375 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:30.633 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
c12a6eaa-3c5f-4d9c-a383-713b720fa542 -t 2000 00:30:30.892 [ 00:30:30.892 { 00:30:30.892 "name": "c12a6eaa-3c5f-4d9c-a383-713b720fa542", 00:30:30.892 "aliases": [ 00:30:30.892 "lvs/lvol" 00:30:30.892 ], 00:30:30.892 "product_name": "Logical Volume", 00:30:30.892 "block_size": 4096, 00:30:30.892 "num_blocks": 38912, 00:30:30.892 "uuid": "c12a6eaa-3c5f-4d9c-a383-713b720fa542", 00:30:30.892 "assigned_rate_limits": { 00:30:30.892 "rw_ios_per_sec": 0, 00:30:30.892 "rw_mbytes_per_sec": 0, 00:30:30.892 "r_mbytes_per_sec": 0, 00:30:30.892 "w_mbytes_per_sec": 0 00:30:30.892 }, 00:30:30.892 "claimed": false, 00:30:30.892 "zoned": false, 00:30:30.892 "supported_io_types": { 00:30:30.892 "read": true, 00:30:30.892 "write": true, 00:30:30.892 "unmap": true, 00:30:30.892 "flush": false, 00:30:30.892 "reset": true, 00:30:30.892 "nvme_admin": false, 00:30:30.892 "nvme_io": false, 00:30:30.892 "nvme_io_md": false, 00:30:30.892 "write_zeroes": true, 00:30:30.892 "zcopy": false, 00:30:30.892 "get_zone_info": false, 00:30:30.892 "zone_management": false, 00:30:30.892 "zone_append": false, 00:30:30.892 "compare": false, 00:30:30.892 "compare_and_write": false, 00:30:30.892 "abort": false, 00:30:30.892 "seek_hole": true, 00:30:30.892 "seek_data": true, 00:30:30.892 "copy": false, 00:30:30.892 "nvme_iov_md": false 00:30:30.892 }, 00:30:30.892 "driver_specific": { 00:30:30.892 "lvol": { 00:30:30.892 "lvol_store_uuid": "e6d78dc4-9596-401e-ac66-d00587a84b87", 00:30:30.892 "base_bdev": "aio_bdev", 00:30:30.892 "thin_provision": false, 00:30:30.892 "num_allocated_clusters": 38, 00:30:30.892 "snapshot": false, 00:30:30.892 "clone": false, 00:30:30.892 "esnap_clone": false 00:30:30.892 } 00:30:30.892 } 00:30:30.892 } 00:30:30.892 ] 00:30:30.892 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:30:30.892 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:30.892 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:31.461 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:31.461 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:31.461 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:31.461 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:31.461 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c12a6eaa-3c5f-4d9c-a383-713b720fa542 00:30:32.030 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6d78dc4-9596-401e-ac66-d00587a84b87 00:30:32.289 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:32.548 00:30:32.548 real 0m19.709s 00:30:32.548 user 
0m36.727s 00:30:32.548 sys 0m4.646s 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:32.548 ************************************ 00:30:32.548 END TEST lvs_grow_dirty 00:30:32.548 ************************************ 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:32.548 nvmf_trace.0 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.548 rmmod nvme_tcp 00:30:32.548 rmmod nvme_fabrics 00:30:32.548 rmmod nvme_keyring 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 355018 ']' 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 355018 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 355018 ']' 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 355018 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:32.548 09:51:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 355018 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 355018' 00:30:32.548 killing process with pid 355018 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 355018 00:30:32.548 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 355018 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:32.807 09:51:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.807 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.355 00:30:35.355 real 0m43.015s 00:30:35.355 user 0m55.906s 00:30:35.355 sys 0m8.368s 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:35.355 ************************************ 00:30:35.355 END TEST nvmf_lvs_grow 00:30:35.355 ************************************ 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:35.355 ************************************ 00:30:35.355 START TEST nvmf_bdev_io_wait 00:30:35.355 ************************************ 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:35.355 * Looking 
for test storage... 00:30:35.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:35.355 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.355 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.356 09:51:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:35.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.356 --rc genhtml_branch_coverage=1 00:30:35.356 --rc genhtml_function_coverage=1 00:30:35.356 --rc genhtml_legend=1 00:30:35.356 --rc geninfo_all_blocks=1 00:30:35.356 --rc geninfo_unexecuted_blocks=1 00:30:35.356 00:30:35.356 ' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:35.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.356 --rc genhtml_branch_coverage=1 00:30:35.356 --rc genhtml_function_coverage=1 00:30:35.356 --rc genhtml_legend=1 00:30:35.356 --rc geninfo_all_blocks=1 00:30:35.356 --rc geninfo_unexecuted_blocks=1 00:30:35.356 00:30:35.356 ' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:35.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.356 --rc genhtml_branch_coverage=1 00:30:35.356 --rc genhtml_function_coverage=1 00:30:35.356 --rc genhtml_legend=1 00:30:35.356 --rc geninfo_all_blocks=1 00:30:35.356 --rc geninfo_unexecuted_blocks=1 00:30:35.356 00:30:35.356 ' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:35.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.356 --rc 
genhtml_branch_coverage=1 00:30:35.356 --rc genhtml_function_coverage=1 00:30:35.356 --rc genhtml_legend=1 00:30:35.356 --rc geninfo_all_blocks=1 00:30:35.356 --rc geninfo_unexecuted_blocks=1 00:30:35.356 00:30:35.356 ' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.356 09:51:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.356 09:51:24 
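Editor's note: the PATH echoed above has grown the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories many times over, because `paths/export.sh` prepends them on every source. A minimal sketch of an order-preserving PATH deduplication helper (the function name `dedup_path` is hypothetical, not part of the SPDK scripts):

```shell
# Deduplicate a colon-separated PATH-like string, keeping first occurrences
# in order. Pure POSIX shell, no external tools.
dedup_path() {
  local IFS=: dir seen=: out=
  for dir in $1; do
    # skip directories we have already emitted
    case "$seen" in *":$dir:"*) continue ;; esac
    seen="$seen$dir:"
    out="${out:+$out:}$dir"
  done
  printf '%s\n' "$out"
}

dedup_path "/a/bin:/b/bin:/a/bin:/c/bin"   # → /a/bin:/b/bin:/c/bin
```

Applied as `PATH=$(dedup_path "$PATH")` after sourcing, this would keep the exported PATH bounded no matter how many times export.sh runs.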
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:35.356 09:51:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.356 09:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:37.266 09:51:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.266 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:30:37.267 Found 0000:09:00.0 (0x8086 - 0x1592) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:30:37.267 Found 
0000:09:00.1 (0x8086 - 0x1592) 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.267 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:37.267 Found net devices under 0000:09:00.0: cvl_0_0 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:37.267 Found net devices under 0000:09:00.1: cvl_0_1 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:30:37.267 09:51:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:37.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:30:37.267 00:30:37.267 --- 10.0.0.2 ping statistics --- 00:30:37.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.267 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:30:37.267 00:30:37.267 --- 10.0.0.1 ping statistics --- 00:30:37.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.267 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.267 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:37.268 09:51:26 
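Editor's note: the `nvmf_tcp_init` trace above (common.sh@271 through @291) splits the two physical ports of one NIC across network namespaces: the target port `cvl_0_0` moves into `cvl_0_0_ns_spdk` with 10.0.0.2, the initiator port `cvl_0_1` stays in the root namespace with 10.0.0.1, and both directions are verified with a single ping. A dry-run sketch of that sequence (it only prints the commands, since executing them requires root and the actual interfaces; names and addresses are taken from the log):

```shell
# Print the netns topology commands used by the test, without executing them.
netns_plan() {
  local ns=cvl_0_0_ns_spdk          # namespace hosting the NVMe-oF target
  local tgt=cvl_0_0 ini=cvl_0_1     # physical ports of the same NIC
  echo "+ ip netns add $ns"
  echo "+ ip link set $tgt netns $ns"
  echo "+ ip addr add 10.0.0.1/24 dev $ini"
  echo "+ ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
  echo "+ ip link set $ini up"
  echo "+ ip netns exec $ns ip link set $tgt up"
}
netns_plan
```

Because both ports sit on the same physical link, traffic between 10.0.0.1 and 10.0.0.2 actually traverses the wire, which is what makes this a "phy" (physical) test rather than a loopback one.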
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=357535 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 357535 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 357535 ']' 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:37.268 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.268 [2024-10-07 09:51:26.222977] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:37.268 [2024-10-07 09:51:26.224062] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:30:37.268 [2024-10-07 09:51:26.224125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.527 [2024-10-07 09:51:26.286637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:37.527 [2024-10-07 09:51:26.397786] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.527 [2024-10-07 09:51:26.397848] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.527 [2024-10-07 09:51:26.397875] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.527 [2024-10-07 09:51:26.397886] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.527 [2024-10-07 09:51:26.397895] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:37.527 [2024-10-07 09:51:26.399459] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.527 [2024-10-07 09:51:26.399489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.527 [2024-10-07 09:51:26.399546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:37.527 [2024-10-07 09:51:26.399549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.527 [2024-10-07 09:51:26.400084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.527 09:51:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.527 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.787 [2024-10-07 09:51:26.549750] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:37.787 [2024-10-07 09:51:26.549962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:37.787 [2024-10-07 09:51:26.550858] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:37.787 [2024-10-07 09:51:26.551682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.787 [2024-10-07 09:51:26.556287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.787 Malloc0 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.787 09:51:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.787 [2024-10-07 09:51:26.624460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=357563 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:30:37.787 09:51:26 
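Editor's note: steps bdev_io_wait.sh@18 through @25 above configure the interrupt-mode target entirely over RPC. A dry-run replay of that bring-up as it would look with SPDK's `rpc.py` (the relative script path is an assumption; subsystem name, bdev, address, and port are taken from the log):

```shell
# Print the RPC bring-up sequence for the bdev_io_wait target, without
# executing it (a live run needs a running nvmf_tgt and its RPC socket).
rpc_plan() {
  local rpc=scripts/rpc.py   # assumed path inside an SPDK checkout
  echo "+ $rpc bdev_set_options -p 5 -c 1"
  echo "+ $rpc framework_start_init"
  echo "+ $rpc nvmf_create_transport -t tcp -o -u 8192"
  echo "+ $rpc bdev_malloc_create 64 512 -b Malloc0"
  echo "+ $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  echo "+ $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  echo "+ $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
}
rpc_plan
```

Note the ordering constraint visible in the log: `bdev_set_options` must run before `framework_start_init` (hence `--wait-for-rpc` on the target's command line), and the transport must exist before the listener is added.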
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=357565 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:37.787 { 00:30:37.787 "params": { 00:30:37.787 "name": "Nvme$subsystem", 00:30:37.787 "trtype": "$TEST_TRANSPORT", 00:30:37.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.787 "adrfam": "ipv4", 00:30:37.787 "trsvcid": "$NVMF_PORT", 00:30:37.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.787 "hdgst": ${hdgst:-false}, 00:30:37.787 "ddgst": ${ddgst:-false} 00:30:37.787 }, 00:30:37.787 "method": "bdev_nvme_attach_controller" 00:30:37.787 } 00:30:37.787 EOF 00:30:37.787 )") 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=357567 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:37.787 09:51:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:37.787 { 00:30:37.787 "params": { 00:30:37.787 "name": "Nvme$subsystem", 00:30:37.787 "trtype": "$TEST_TRANSPORT", 00:30:37.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.787 "adrfam": "ipv4", 00:30:37.787 "trsvcid": "$NVMF_PORT", 00:30:37.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.787 "hdgst": ${hdgst:-false}, 00:30:37.787 "ddgst": ${ddgst:-false} 00:30:37.787 }, 00:30:37.787 "method": "bdev_nvme_attach_controller" 00:30:37.787 } 00:30:37.787 EOF 00:30:37.787 )") 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=357570 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:37.787 { 00:30:37.787 "params": { 00:30:37.787 "name": 
"Nvme$subsystem", 00:30:37.787 "trtype": "$TEST_TRANSPORT", 00:30:37.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.787 "adrfam": "ipv4", 00:30:37.787 "trsvcid": "$NVMF_PORT", 00:30:37.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.787 "hdgst": ${hdgst:-false}, 00:30:37.787 "ddgst": ${ddgst:-false} 00:30:37.787 }, 00:30:37.787 "method": "bdev_nvme_attach_controller" 00:30:37.787 } 00:30:37.787 EOF 00:30:37.787 )") 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:37.787 { 00:30:37.787 "params": { 00:30:37.787 "name": "Nvme$subsystem", 00:30:37.787 "trtype": "$TEST_TRANSPORT", 00:30:37.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.787 "adrfam": "ipv4", 00:30:37.787 "trsvcid": "$NVMF_PORT", 00:30:37.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.787 "hdgst": ${hdgst:-false}, 00:30:37.787 "ddgst": ${ddgst:-false} 00:30:37.787 }, 00:30:37.787 "method": 
"bdev_nvme_attach_controller" 00:30:37.787 } 00:30:37.787 EOF 00:30:37.787 )") 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:30:37.787 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 357563 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:37.788 "params": { 00:30:37.788 "name": "Nvme1", 00:30:37.788 "trtype": "tcp", 00:30:37.788 "traddr": "10.0.0.2", 00:30:37.788 "adrfam": "ipv4", 00:30:37.788 "trsvcid": "4420", 00:30:37.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:37.788 "hdgst": false, 00:30:37.788 "ddgst": false 00:30:37.788 }, 00:30:37.788 "method": "bdev_nvme_attach_controller" 00:30:37.788 }' 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:37.788 "params": { 00:30:37.788 "name": "Nvme1", 00:30:37.788 "trtype": "tcp", 00:30:37.788 "traddr": "10.0.0.2", 00:30:37.788 "adrfam": "ipv4", 00:30:37.788 "trsvcid": "4420", 00:30:37.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:37.788 "hdgst": false, 00:30:37.788 "ddgst": false 00:30:37.788 }, 00:30:37.788 "method": "bdev_nvme_attach_controller" 00:30:37.788 }' 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:37.788 "params": { 00:30:37.788 "name": "Nvme1", 00:30:37.788 "trtype": "tcp", 00:30:37.788 "traddr": "10.0.0.2", 00:30:37.788 "adrfam": "ipv4", 00:30:37.788 "trsvcid": "4420", 00:30:37.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:37.788 "hdgst": false, 00:30:37.788 "ddgst": false 00:30:37.788 }, 00:30:37.788 "method": "bdev_nvme_attach_controller" 00:30:37.788 }' 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:30:37.788 09:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:37.788 "params": { 00:30:37.788 "name": "Nvme1", 00:30:37.788 "trtype": "tcp", 00:30:37.788 "traddr": "10.0.0.2", 00:30:37.788 "adrfam": "ipv4", 00:30:37.788 "trsvcid": "4420", 00:30:37.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:37.788 "hdgst": false, 00:30:37.788 "ddgst": false 00:30:37.788 }, 00:30:37.788 "method": "bdev_nvme_attach_controller" 
00:30:37.788 }' 00:30:37.788 [2024-10-07 09:51:26.677760] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:30:37.788 [2024-10-07 09:51:26.677760] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:30:37.788 [2024-10-07 09:51:26.677762] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:30:37.788 [2024-10-07 09:51:26.677801] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:30:37.788 [2024-10-07 09:51:26.677846] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-07 09:51:26.677847] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-07 09:51:26.677847] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:37.788 --proc-type=auto ] 00:30:37.788 --proc-type=auto ] 00:30:37.788 [2024-10-07 09:51:26.677868] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:38.047 [2024-10-07 09:51:26.848281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.047 [2024-10-07 09:51:26.947237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:30:38.047 [2024-10-07 09:51:26.950999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.305 [2024-10-07 09:51:27.051248] 
reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:38.305 [2024-10-07 09:51:27.052460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.305 [2024-10-07 09:51:27.121215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.305 [2024-10-07 09:51:27.150614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:30:38.305 [2024-10-07 09:51:27.213844] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:30:38.305 Running I/O for 1 seconds... 00:30:38.564 Running I/O for 1 seconds... 00:30:38.822 Running I/O for 1 seconds... 00:30:38.822 Running I/O for 1 seconds... 00:30:39.389 8320.00 IOPS, 32.50 MiB/s 00:30:39.389 Latency(us) 00:30:39.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.389 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:39.389 Nvme1n1 : 1.06 7964.29 31.11 0.00 0.00 15234.77 4247.70 59419.31 00:30:39.389 =================================================================================================================== 00:30:39.389 Total : 7964.29 31.11 0.00 0.00 15234.77 4247.70 59419.31 00:30:39.646 8520.00 IOPS, 33.28 MiB/s 00:30:39.646 Latency(us) 00:30:39.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.646 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:39.646 Nvme1n1 : 1.01 8577.03 33.50 0.00 0.00 14849.81 5995.33 20971.52 00:30:39.646 =================================================================================================================== 00:30:39.646 Total : 8577.03 33.50 0.00 0.00 14849.81 5995.33 20971.52 00:30:39.646 171536.00 IOPS, 670.06 MiB/s 00:30:39.646 Latency(us) 00:30:39.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.646 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:39.646 Nvme1n1 : 1.00 171205.01 668.77 0.00 0.00 743.65 321.61 1917.53 
00:30:39.646 =================================================================================================================== 00:30:39.646 Total : 171205.01 668.77 0.00 0.00 743.65 321.61 1917.53 00:30:39.903 09:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 357565 00:30:39.903 09:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 357567 00:30:39.903 9508.00 IOPS, 37.14 MiB/s 00:30:39.903 Latency(us) 00:30:39.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.903 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:39.903 Nvme1n1 : 1.01 9598.89 37.50 0.00 0.00 13296.82 3665.16 38836.15 00:30:39.903 =================================================================================================================== 00:30:39.903 Total : 9598.89 37.50 0.00 0.00 13296.82 3665.16 38836.15 00:30:39.903 09:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 357570 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:40.162 09:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.162 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:40.162 rmmod nvme_tcp 00:30:40.162 rmmod nvme_fabrics 00:30:40.162 rmmod nvme_keyring 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 357535 ']' 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 357535 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 357535 ']' 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 357535 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:40.420 09:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357535 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357535' 00:30:40.420 killing process with pid 357535 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 357535 00:30:40.420 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 357535 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.681 09:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.591 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.591 00:30:42.591 real 0m7.614s 00:30:42.591 user 0m16.456s 00:30:42.591 sys 0m4.377s 00:30:42.591 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:42.591 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.591 ************************************ 00:30:42.591 END TEST nvmf_bdev_io_wait 00:30:42.591 ************************************ 00:30:42.591 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:42.591 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:42.591 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:42.591 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:42.591 ************************************ 00:30:42.591 START TEST nvmf_queue_depth 00:30:42.591 ************************************ 00:30:42.591 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:42.591 * Looking for test storage... 00:30:42.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@341 -- # ver2_l=1 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.851 --rc genhtml_branch_coverage=1 00:30:42.851 --rc genhtml_function_coverage=1 00:30:42.851 --rc genhtml_legend=1 00:30:42.851 --rc geninfo_all_blocks=1 00:30:42.851 --rc geninfo_unexecuted_blocks=1 00:30:42.851 00:30:42.851 ' 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.851 --rc genhtml_branch_coverage=1 00:30:42.851 --rc genhtml_function_coverage=1 00:30:42.851 --rc genhtml_legend=1 00:30:42.851 --rc geninfo_all_blocks=1 00:30:42.851 --rc geninfo_unexecuted_blocks=1 00:30:42.851 00:30:42.851 ' 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:42.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.851 --rc genhtml_branch_coverage=1 00:30:42.851 --rc genhtml_function_coverage=1 00:30:42.851 --rc genhtml_legend=1 00:30:42.851 --rc geninfo_all_blocks=1 00:30:42.851 --rc geninfo_unexecuted_blocks=1 00:30:42.851 00:30:42.851 ' 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:42.851 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:30:42.851 --rc genhtml_branch_coverage=1 00:30:42.851 --rc genhtml_function_coverage=1 00:30:42.851 --rc genhtml_legend=1 00:30:42.851 --rc geninfo_all_blocks=1 00:30:42.851 --rc geninfo_unexecuted_blocks=1 00:30:42.851 00:30:42.851 ' 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.851 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.852 09:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.852 09:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:42.852 09:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.852 09:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.759 
09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.759 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:30:44.760 Found 0000:09:00.0 (0x8086 - 0x1592) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.760 09:51:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:30:44.760 Found 0000:09:00.1 (0x8086 - 0x1592) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 
)) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:44.760 Found net devices under 0000:09:00.0: cvl_0_0 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:44.760 Found net devices under 0000:09:00.1: cvl_0_1 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:44.760 09:51:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.760 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:45.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:30:45.020 00:30:45.020 --- 10.0.0.2 ping statistics --- 00:30:45.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.020 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:30:45.020 00:30:45.020 --- 10.0.0.1 ping statistics --- 00:30:45.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.020 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:45.020 09:51:33 
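The `nvmf_tcp_init` sequence traced above can be summarized as the following sketch. It is a simplified reconstruction of what the test's `nvmf/common.sh` did on this run, not the script itself; the interface names (`cvl_0_0`, `cvl_0_1`), namespace name, and 10.0.0.0/24 addresses are taken directly from the log. Requires root and the actual NICs, so it is illustrative only.

```shell
# Split the two ports of the E810 NIC into target and initiator sides:
# the target port moves into a network namespace, the initiator stays on the host.
ip netns add cvl_0_0_ns_spdk                      # namespace for the NVMe-oF target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic (port 4420) in from the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions, as the log does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Putting the target in its own namespace is what lets a single dual-port host exercise a real TCP path between "two machines" in one autotest run.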
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=359686 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 359686 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 359686 ']' 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.020 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.020 [2024-10-07 09:51:33.927619] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.020 [2024-10-07 09:51:33.928812] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:30:45.020 [2024-10-07 09:51:33.928868] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.020 [2024-10-07 09:51:33.999282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.280 [2024-10-07 09:51:34.109295] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.280 [2024-10-07 09:51:34.109338] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.280 [2024-10-07 09:51:34.109361] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.280 [2024-10-07 09:51:34.109371] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.280 [2024-10-07 09:51:34.109380] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.280 [2024-10-07 09:51:34.109894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.280 [2024-10-07 09:51:34.195318] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.280 [2024-10-07 09:51:34.195613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:45.280 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.280 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.281 [2024-10-07 09:51:34.250441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.281 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.540 Malloc0 00:30:45.540 09:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.540 [2024-10-07 09:51:34.314555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.540 
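The target configuration performed by `queue_depth.sh` lines 23-27 above maps onto plain SPDK JSON-RPC calls. A sketch using `rpc.py` directly instead of the suite's `rpc_cmd` wrapper; the `RPC` path is an assumption (adjust to your SPDK checkout), while the RPC names and arguments are exactly those visible in the log:

```shell
RPC=./scripts/rpc.py   # assumed location; talks to /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192 B in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

This requires a running `nvmf_tgt` (in the log, launched inside the `cvl_0_0_ns_spdk` namespace with `-m 0x2 --interrupt-mode`), so it cannot run standalone.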
09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=359820 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 359820 /var/tmp/bdevperf.sock 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 359820 ']' 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.540 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.540 [2024-10-07 09:51:34.360399] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:30:45.540 [2024-10-07 09:51:34.360458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359820 ] 00:30:45.540 [2024-10-07 09:51:34.415062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.540 [2024-10-07 09:51:34.519030] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.798 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.799 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:30:45.799 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:45.799 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.799 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.799 NVMe0n1 00:30:45.799 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.799 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:46.057 Running I/O for 10 seconds... 
00:30:56.280 8194.00 IOPS, 32.01 MiB/s 8577.50 IOPS, 33.51 MiB/s 8537.33 IOPS, 33.35 MiB/s 8594.75 IOPS, 33.57 MiB/s 8603.60 IOPS, 33.61 MiB/s 8688.83 IOPS, 33.94 MiB/s 8660.00 IOPS, 33.83 MiB/s 8706.25 IOPS, 34.01 MiB/s 8727.67 IOPS, 34.09 MiB/s 8709.60 IOPS, 34.02 MiB/s 00:30:56.280 Latency(us) 00:30:56.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.280 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:56.280 Verification LBA range: start 0x0 length 0x4000 00:30:56.280 NVMe0n1 : 10.11 8718.57 34.06 0.00 0.00 116527.69 15437.37 69516.71 00:30:56.280 =================================================================================================================== 00:30:56.280 Total : 8718.57 34.06 0.00 0.00 116527.69 15437.37 69516.71 00:30:56.280 { 00:30:56.280 "results": [ 00:30:56.280 { 00:30:56.280 "job": "NVMe0n1", 00:30:56.280 "core_mask": "0x1", 00:30:56.280 "workload": "verify", 00:30:56.280 "status": "finished", 00:30:56.280 "verify_range": { 00:30:56.280 "start": 0, 00:30:56.280 "length": 16384 00:30:56.280 }, 00:30:56.280 "queue_depth": 1024, 00:30:56.280 "io_size": 4096, 00:30:56.280 "runtime": 10.107164, 00:30:56.280 "iops": 8718.568334302283, 00:30:56.280 "mibps": 34.05690755586829, 00:30:56.280 "io_failed": 0, 00:30:56.280 "io_timeout": 0, 00:30:56.280 "avg_latency_us": 116527.693943545, 00:30:56.280 "min_latency_us": 15437.368888888888, 00:30:56.280 "max_latency_us": 69516.70518518519 00:30:56.280 } 00:30:56.280 ], 00:30:56.280 "core_count": 1 00:30:56.280 } 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 359820 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 359820 ']' 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 359820 00:30:56.280 09:51:45 
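The bdevperf summary above is internally consistent and can be cross-checked mechanically: throughput in MiB/s is just `iops * io_size / 2^20`. A small sanity check, with the values copied from the JSON result block in the log:

```shell
# Cross-check the bdevperf result: 8718.57 IOPS at 4 KiB per I/O
# should reproduce the reported 34.06 MiB/s.
iops=8718.568334302283
io_size=4096
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "computed: $mibps MiB/s (log reports 34.06 MiB/s)"
```

Note the queue depth of 1024 (the `-q 1024` bdevperf flag) is the point of this test: it drives far more outstanding I/O than the default and exercises the interrupt-mode target under saturation, which is why average latency sits around 116 ms.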
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 359820 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 359820' 00:30:56.280 killing process with pid 359820 00:30:56.280 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 359820 00:30:56.281 Received shutdown signal, test time was about 10.000000 seconds 00:30:56.281 00:30:56.281 Latency(us) 00:30:56.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.281 =================================================================================================================== 00:30:56.281 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:56.281 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 359820 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@121 -- # sync 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.539 rmmod nvme_tcp 00:30:56.539 rmmod nvme_fabrics 00:30:56.539 rmmod nvme_keyring 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 359686 ']' 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 359686 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 359686 ']' 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 359686 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 359686 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 359686' 00:30:56.539 killing process with pid 359686 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 359686 00:30:56.539 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 359686 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.797 09:51:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.797 09:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.336 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.336 00:30:59.336 real 0m16.222s 00:30:59.336 user 0m22.467s 00:30:59.336 sys 0m3.360s 00:30:59.336 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.336 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:59.336 ************************************ 00:30:59.336 END TEST nvmf_queue_depth 00:30:59.336 ************************************ 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:59.337 ************************************ 00:30:59.337 START TEST nvmf_target_multipath 00:30:59.337 ************************************ 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:59.337 * Looking for test storage... 
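The xtrace that follows walks `scripts/common.sh`'s `cmp_versions`/`lt` helper comparing the detected lcov version against 2: each version string is split into components (`IFS=.-:` in the shell, simplified to dots here), missing components default to 0, and components are compared numerically left to right. A minimal Python sketch of that comparison (the function name mirrors the shell helper; this is an illustration, not the shell code itself):

```python
# Component-wise version comparison, mirroring the cmp_versions trace
# below: split on '.', pad the shorter list with zeros, compare
# numerically left to right.
def lt(a, b):
    va = [int(x) for x in a.split(".")]
    vb = [int(x) for x in b.split(".")]
    n = max(len(va), len(vb))
    va += [0] * (n - len(va))   # "2" is treated as 2.0
    vb += [0] * (n - len(vb))
    return va < vb

print(lt("1.15", "2"))  # the check the trace performs: 1.15 < 2
```

In the trace this is why `ver1[v]=1` is compared against `ver2[v]=2` first: 1 < 2 decides the result immediately and the helper returns 0 (true), selecting the pre-2.0 lcov option set.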
00:30:59.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.337 09:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.337 09:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:59.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.337 --rc genhtml_branch_coverage=1 00:30:59.337 --rc genhtml_function_coverage=1 00:30:59.337 --rc genhtml_legend=1 00:30:59.337 --rc geninfo_all_blocks=1 00:30:59.337 --rc geninfo_unexecuted_blocks=1 00:30:59.337 00:30:59.337 ' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:59.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.337 --rc genhtml_branch_coverage=1 00:30:59.337 --rc genhtml_function_coverage=1 00:30:59.337 --rc genhtml_legend=1 00:30:59.337 --rc geninfo_all_blocks=1 00:30:59.337 --rc geninfo_unexecuted_blocks=1 00:30:59.337 00:30:59.337 ' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:59.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.337 --rc genhtml_branch_coverage=1 00:30:59.337 --rc genhtml_function_coverage=1 00:30:59.337 --rc genhtml_legend=1 00:30:59.337 --rc geninfo_all_blocks=1 00:30:59.337 --rc geninfo_unexecuted_blocks=1 00:30:59.337 00:30:59.337 ' 00:30:59.337 09:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:59.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.337 --rc genhtml_branch_coverage=1 00:30:59.337 --rc genhtml_function_coverage=1 00:30:59.337 --rc genhtml_legend=1 00:30:59.337 --rc geninfo_all_blocks=1 00:30:59.337 --rc geninfo_unexecuted_blocks=1 00:30:59.337 00:30:59.337 ' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.337 09:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.337 
09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.337 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:59.338 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.338 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:59.338 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:59.338 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.338 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 
00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:31:01.242 Found 0000:09:00.0 (0x8086 - 0x1592) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath 
-- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:31:01.242 Found 0000:09:00.1 (0x8086 - 0x1592) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.242 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:01.243 Found net devices under 0000:09:00.0: cvl_0_0 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:01.243 09:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:01.243 Found net devices under 0000:09:00.1: cvl_0_1 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.243 09:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.243 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:31:01.243 00:31:01.243 --- 10.0.0.2 ping statistics --- 00:31:01.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.243 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:31:01.243 00:31:01.243 --- 10.0.0.1 ping statistics --- 00:31:01.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.243 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:01.243 only one NIC for nvmf test 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:01.243 09:51:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.243 rmmod nvme_tcp 00:31:01.243 rmmod nvme_fabrics 00:31:01.243 rmmod nvme_keyring 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:01.243 09:51:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.243 09:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.151 
09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:03.151 00:31:03.151 real 0m4.335s 00:31:03.151 user 0m0.856s 00:31:03.151 sys 0m1.484s 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.151 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:03.151 ************************************ 00:31:03.151 END TEST nvmf_target_multipath 00:31:03.151 ************************************ 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:03.410 ************************************ 00:31:03.410 START TEST nvmf_zcopy 00:31:03.410 ************************************ 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:03.410 * Looking for test storage... 
00:31:03.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.410 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:03.411 09:51:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:03.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.411 --rc genhtml_branch_coverage=1 00:31:03.411 --rc genhtml_function_coverage=1 00:31:03.411 --rc genhtml_legend=1 00:31:03.411 --rc geninfo_all_blocks=1 00:31:03.411 --rc geninfo_unexecuted_blocks=1 00:31:03.411 00:31:03.411 ' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:03.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.411 --rc genhtml_branch_coverage=1 00:31:03.411 --rc genhtml_function_coverage=1 00:31:03.411 --rc genhtml_legend=1 00:31:03.411 --rc geninfo_all_blocks=1 00:31:03.411 --rc geninfo_unexecuted_blocks=1 00:31:03.411 00:31:03.411 ' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:03.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.411 --rc genhtml_branch_coverage=1 00:31:03.411 --rc genhtml_function_coverage=1 00:31:03.411 --rc genhtml_legend=1 00:31:03.411 --rc geninfo_all_blocks=1 00:31:03.411 --rc geninfo_unexecuted_blocks=1 00:31:03.411 00:31:03.411 ' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:03.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.411 --rc genhtml_branch_coverage=1 00:31:03.411 --rc genhtml_function_coverage=1 00:31:03.411 --rc genhtml_legend=1 00:31:03.411 --rc geninfo_all_blocks=1 00:31:03.411 --rc geninfo_unexecuted_blocks=1 00:31:03.411 00:31:03.411 ' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.411 09:51:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:03.411 09:51:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:03.411 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:03.412 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:03.412 09:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:05.315 
09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.315 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.315 09:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:31:05.316 Found 0000:09:00.0 (0x8086 - 0x1592) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:31:05.316 Found 0000:09:00.1 (0x8086 - 0x1592) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:05.316 Found net devices under 0000:09:00.0: cvl_0_0 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:05.316 Found net devices under 0000:09:00.1: cvl_0_1 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.316 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.575 09:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:05.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:31:05.575 00:31:05.575 --- 10.0.0.2 ping statistics --- 00:31:05.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.575 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:05.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:31:05.575 00:31:05.575 --- 10.0.0.1 ping statistics --- 00:31:05.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.575 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # 
nvmfpid=364726 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 364726 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 364726 ']' 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.575 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.575 [2024-10-07 09:51:54.468071] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:05.575 [2024-10-07 09:51:54.469158] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:31:05.575 [2024-10-07 09:51:54.469210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.575 [2024-10-07 09:51:54.528342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.834 [2024-10-07 09:51:54.637008] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.834 [2024-10-07 09:51:54.637060] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.834 [2024-10-07 09:51:54.637083] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.834 [2024-10-07 09:51:54.637094] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.834 [2024-10-07 09:51:54.637103] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.834 [2024-10-07 09:51:54.637609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.834 [2024-10-07 09:51:54.723533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:05.834 [2024-10-07 09:51:54.723853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.834 [2024-10-07 09:51:54.770173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.834 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.835 
09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.835 [2024-10-07 09:51:54.786362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.835 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:06.093 malloc0 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:06.093 { 00:31:06.093 "params": { 00:31:06.093 "name": "Nvme$subsystem", 00:31:06.093 "trtype": "$TEST_TRANSPORT", 00:31:06.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.093 "adrfam": "ipv4", 00:31:06.093 "trsvcid": "$NVMF_PORT", 00:31:06.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.093 "hdgst": ${hdgst:-false}, 00:31:06.093 "ddgst": ${ddgst:-false} 00:31:06.093 }, 00:31:06.093 "method": "bdev_nvme_attach_controller" 00:31:06.093 } 00:31:06.093 EOF 00:31:06.093 )") 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:31:06.093 09:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:31:06.093 09:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:06.093 "params": { 00:31:06.093 "name": "Nvme1", 00:31:06.093 "trtype": "tcp", 00:31:06.093 "traddr": "10.0.0.2", 00:31:06.093 "adrfam": "ipv4", 00:31:06.093 "trsvcid": "4420", 00:31:06.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:06.093 "hdgst": false, 00:31:06.093 "ddgst": false 00:31:06.093 }, 00:31:06.093 "method": "bdev_nvme_attach_controller" 00:31:06.093 }' 00:31:06.093 [2024-10-07 09:51:54.890183] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:31:06.093 [2024-10-07 09:51:54.890250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364771 ] 00:31:06.093 [2024-10-07 09:51:54.945631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.093 [2024-10-07 09:51:55.055069] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.659 Running I/O for 10 seconds... 
00:31:16.696 5471.00 IOPS, 42.74 MiB/s 5537.50 IOPS, 43.26 MiB/s 5537.00 IOPS, 43.26 MiB/s 5554.00 IOPS, 43.39 MiB/s 5563.20 IOPS, 43.46 MiB/s 5570.17 IOPS, 43.52 MiB/s 5571.86 IOPS, 43.53 MiB/s 5572.00 IOPS, 43.53 MiB/s 5572.33 IOPS, 43.53 MiB/s 5571.50 IOPS, 43.53 MiB/s 00:31:16.696 Latency(us) 00:31:16.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.696 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:16.696 Verification LBA range: start 0x0 length 0x1000 00:31:16.696 Nvme1n1 : 10.02 5573.91 43.55 0.00 0.00 22901.27 3058.35 29515.47 00:31:16.696 =================================================================================================================== 00:31:16.696 Total : 5573.91 43.55 0.00 0.00 22901.27 3058.35 29515.47 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=365942 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 
00:31:16.955 { 00:31:16.955 "params": { 00:31:16.955 "name": "Nvme$subsystem", 00:31:16.955 "trtype": "$TEST_TRANSPORT", 00:31:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.955 "adrfam": "ipv4", 00:31:16.955 "trsvcid": "$NVMF_PORT", 00:31:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.955 "hdgst": ${hdgst:-false}, 00:31:16.955 "ddgst": ${ddgst:-false} 00:31:16.955 }, 00:31:16.955 "method": "bdev_nvme_attach_controller" 00:31:16.955 } 00:31:16.955 EOF 00:31:16.955 )") 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:31:16.955 [2024-10-07 09:52:05.714140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.714175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:31:16.955 09:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:16.955 "params": { 00:31:16.955 "name": "Nvme1", 00:31:16.955 "trtype": "tcp", 00:31:16.955 "traddr": "10.0.0.2", 00:31:16.955 "adrfam": "ipv4", 00:31:16.955 "trsvcid": "4420", 00:31:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.955 "hdgst": false, 00:31:16.955 "ddgst": false 00:31:16.955 }, 00:31:16.955 "method": "bdev_nvme_attach_controller" 00:31:16.955 }' 00:31:16.955 [2024-10-07 09:52:05.722068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.722090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.730100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.730121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.738066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.738087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.746090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.746110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.752912] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:31:16.955 [2024-10-07 09:52:05.752987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365942 ] 00:31:16.955 [2024-10-07 09:52:05.754081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.754102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.762082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.762101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.770081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.770101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.778089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.778109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.786076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.786097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.794081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.794101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.955 [2024-10-07 09:52:05.802081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.955 [2024-10-07 09:52:05.802101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:31:16.955 [2024-10-07 09:52:05.810068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.810088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.811866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:16.955 [2024-10-07 09:52:05.818101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.818135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.826110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.826146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.834080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.834101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.842081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.842100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.850082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.850102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.858083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.858104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.866080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.866100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.874100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.955 [2024-10-07 09:52:05.874129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.955 [2024-10-07 09:52:05.882094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.882129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.956 [2024-10-07 09:52:05.890098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.890134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.956 [2024-10-07 09:52:05.898081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.898101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.956 [2024-10-07 09:52:05.906081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.906101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.956 [2024-10-07 09:52:05.914081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.914102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.956 [2024-10-07 09:52:05.922083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.922104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.956 [2024-10-07 09:52:05.925776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:31:16.956 [2024-10-07 09:52:05.930079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.930099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.956 [2024-10-07 09:52:05.938097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.938117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:16.956 [2024-10-07 09:52:05.946107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:16.956 [2024-10-07 09:52:05.946142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:05.954148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:05.954185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:05.962110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:05.962146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:05.970107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:05.970144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:05.978113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:05.978147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:05.986116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:05.986152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:05.994111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:05.994133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.002105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.002142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.010110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.010148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.018110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.018148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.026080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.026100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.034084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.034104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.042087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.042111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.050086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.050110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.058070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.058092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.066085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.214 [2024-10-07 09:52:06.066107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.214 [2024-10-07 09:52:06.074083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.074105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.082085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.082108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.090084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.090107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.098100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.098124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.106083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.106105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 Running I/O for 5 seconds...
00:31:17.215 [2024-10-07 09:52:06.121879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.121917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.132938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.132982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.146119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.146161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.156268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.156295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.171333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.171360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.182544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.182570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.193699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.193728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.215 [2024-10-07 09:52:06.205036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.215 [2024-10-07 09:52:06.205061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.220395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.220421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.235155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.235181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.244489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.244516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.256445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.256472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.267718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.267746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.283715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.283742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.294079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.294120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.305475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.305501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.319139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.319166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.328263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.328289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.340441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.340468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.351206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.351247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.362082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.362109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.373697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.373723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.384560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.384586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.400225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.400251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.410264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.410290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.422298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.422325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.433437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.433463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.444594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.444620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.455675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.455717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.473 [2024-10-07 09:52:06.467030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.473 [2024-10-07 09:52:06.467059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.482979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.483021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.492636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.492664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.504797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.504825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.519829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.519857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.536435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.536463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.551105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.551132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.560602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.560628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.572954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.572996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.586207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.586245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.596038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.596065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.611362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.611390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.622127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.622153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.633369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.633395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.648137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.648164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.663149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.663176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.672512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.672538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.684635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.684660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.698241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.698268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.708116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.708142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:17.731 [2024-10-07 09:52:06.723384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:17.731 [2024-10-07 09:52:06.723411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.739632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.739687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.749394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.749420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.761639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.761690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.772806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.772834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.788188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.788215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.801026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.801054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.811525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.811552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.828111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.828148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.843817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.843845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.861606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.861633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.874133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.874160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.884061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.884087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.898785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.898812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.909545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.909572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.921527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.921552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.932779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.932805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.947589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.947616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.957114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.957140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.969170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.969197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:06.984511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:06.984538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:07.000209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:07.000242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:07.015045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:07.015072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.073 [2024-10-07 09:52:07.025167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.073 [2024-10-07 09:52:07.025208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.039421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.039453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.052912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.052942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.066169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.066195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.076169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.076196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.091848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.091876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.107543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.107569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 11276.00 IOPS, 88.09 MiB/s [2024-10-07 09:52:07.117506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.117532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.129783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.129812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.141082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.141108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.154376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.154402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.164273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.164299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.179409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.179435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.195721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.195763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.206032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.206058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.217625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.217651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.229121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.229147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.242585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.242611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.252355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.252381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.267098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.267125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.283736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.283763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.293440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.293466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.305502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.305528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.318723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.318751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.327904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.327931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.343215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.343241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.354152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.354177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.385 [2024-10-07 09:52:07.364861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.385 [2024-10-07 09:52:07.364887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.652 [2024-10-07 09:52:07.379392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.652 [2024-10-07 09:52:07.379420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.652 [2024-10-07 09:52:07.389352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.652 [2024-10-07 09:52:07.389379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.652 [2024-10-07 09:52:07.400986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.652 [2024-10-07 09:52:07.401014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.652 [2024-10-07 09:52:07.414679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.652 [2024-10-07 09:52:07.414705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.652 [2024-10-07 09:52:07.424113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.652 [2024-10-07 09:52:07.424139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.652 [2024-10-07 09:52:07.439088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.652 [2024-10-07 09:52:07.439114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.449611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.449650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.461703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.461738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.475076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.475103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.485108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.485134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.497606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.497633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.509034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.509060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.524467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.524508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.540565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.540591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.554726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.554752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.564352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.564378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.576876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.576903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.590844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.590871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.600574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.600600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.612900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.612926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.624638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.624690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.653 [2024-10-07 09:52:07.635518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.653 [2024-10-07 09:52:07.635544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:18.918 [2024-10-07 09:52:07.646829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:18.918 [2024-10-07 09:52:07.646856] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.657574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.657602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.668889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.668916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.684606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.684632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.699610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.699636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.709657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.709710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.722087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.722113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.733238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.733264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.747608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.747634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:18.918 [2024-10-07 09:52:07.757313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.757339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.769539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.769577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.781093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.781132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.796361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.796387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.811268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.811294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.821017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.821043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.833160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.833184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.848134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.848160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.866079] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.866105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.877111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.877137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.888441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.888467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.903618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.903643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.918 [2024-10-07 09:52:07.913302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.918 [2024-10-07 09:52:07.913329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:07.925457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:07.925484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:07.938840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:07.938868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:07.948568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:07.948594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:07.960375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:07.960402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:07.976781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:07.976808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:07.991105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:07.991133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.000806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.000833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.013466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.013514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.027145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.027171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.036856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.036882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.049086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.049112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.063893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 
[2024-10-07 09:52:08.063933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.073495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.073520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.085243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.085270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.096423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.096448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.107721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.107748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 11321.00 IOPS, 88.45 MiB/s [2024-10-07 09:52:08.118738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.118765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.128602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.128628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.140990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.141017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.155822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.155849] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.177 [2024-10-07 09:52:08.165497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.177 [2024-10-07 09:52:08.165523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.177612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.177638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.191546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.191573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.201738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.201764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.213629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.213680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.224893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.224920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.238155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.238205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.247267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.247293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:19.436 [2024-10-07 09:52:08.259151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.259177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.270089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.270112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.281052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.281077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.295149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.295175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.304741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.304769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.316821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.316848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.331685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.331713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.341276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.341303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.353260] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.353286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.367310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.367337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.377231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.377273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.389336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.389363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.402737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.402765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.412195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.412221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.436 [2024-10-07 09:52:08.427069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.436 [2024-10-07 09:52:08.427093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.437068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.437094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.449034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.449060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.464608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.464635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.479569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.479610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.489366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.489392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.501835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.501862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.512943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.512984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.528333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.528372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.544290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.544317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.559143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 
[2024-10-07 09:52:08.559169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.568715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.568742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.580893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.580926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.595978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.596020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.606193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.606218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.618394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.618434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.629510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.629536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.641205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.641231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.656596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.656623] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.670939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.670986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.694 [2024-10-07 09:52:08.680529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.694 [2024-10-07 09:52:08.680555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.692590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.692617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.709267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.709308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.723337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.723363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.732575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.732601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.744622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.744663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.760449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.760475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:19.952 [2024-10-07 09:52:08.774831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.774858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.784718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.784745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.797504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.797528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.810719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.810746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.820896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.820923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.833066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.833090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.846477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.846503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.856067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.856093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.870678] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.870705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.881568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.881592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.892349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.892375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.907778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.907805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.917247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.917272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.928922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.928948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.952 [2024-10-07 09:52:08.944512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.952 [2024-10-07 09:52:08.944553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:08.960674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:08.960700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:08.974859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:08.974887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:08.984474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:08.984499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:08.997285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:08.997310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.012501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.012528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.027427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.027453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.036502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.036527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.051821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.051849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.063462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.063496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.074540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 
[2024-10-07 09:52:09.074564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.085765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.085791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.096733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.096760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.111876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.111919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 11358.67 IOPS, 88.74 MiB/s [2024-10-07 09:52:09.121710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.121739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.133597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.133623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.144871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.144897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.158906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.158933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.168536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.168569] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.180743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.180770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.195029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.195055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.210 [2024-10-07 09:52:09.204057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.210 [2024-10-07 09:52:09.204083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.216515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.216539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.232322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.232347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.246288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.246314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.255533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.255559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.267733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.267760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:20.469 [2024-10-07 09:52:09.278722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.278748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.294977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.295002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.304333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.304359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.316813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.316839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.331093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.331118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.340453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.340479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.352474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.352500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.368609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.368635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.380926] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.380952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.394598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.394624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.404077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.404110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.419101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.419125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.429863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.429890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.443572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.443612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.452719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.452760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.469 [2024-10-07 09:52:09.464803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.469 [2024-10-07 09:52:09.464831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.477781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.477808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.487171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.487197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.499070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.499093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.509798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.509826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.520310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.520336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.535375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.535400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.545125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.545150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.556990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.557014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.571145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 
[2024-10-07 09:52:09.571171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.580401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.580427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.595680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.595708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.606227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.606267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.617198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.617224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.630189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.630223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.639865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.639892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.656307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.656333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.672306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.672331] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.682147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.682173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.693090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.693115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.705787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.705828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.729 [2024-10-07 09:52:09.716000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.729 [2024-10-07 09:52:09.716039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.727256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.727282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.738375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.738402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.749440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.749466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.760706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.760732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:20.988 [2024-10-07 09:52:09.776417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.776443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.792373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.792399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.807311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.807336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.817572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.817597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.829908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.829935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.841224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.841249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.853901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.853942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.863897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.863937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.876322] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.876346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.887055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.887081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.897797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.897823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.909178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.909204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.922227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.922252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.932177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.932202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.947070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.947096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.957964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.957992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.969364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.969388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.988 [2024-10-07 09:52:09.980485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.988 [2024-10-07 09:52:09.980510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:09.991917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:09.991943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.002886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.002913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.019605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.019644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.029275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.029305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.042413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.042446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.053244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.053270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.066786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 
[2024-10-07 09:52:10.066813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.076070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.076097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.088826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.088853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.103780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.103807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.113398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.113424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 11389.25 IOPS, 88.98 MiB/s [2024-10-07 09:52:10.125737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.125764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.139929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.139969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.149564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.149589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.161909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.161937] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.173377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.247 [2024-10-07 09:52:10.173418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.247 [2024-10-07 09:52:10.188704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.248 [2024-10-07 09:52:10.188732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.248 [2024-10-07 09:52:10.202432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.248 [2024-10-07 09:52:10.202473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.248 [2024-10-07 09:52:10.211804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.248 [2024-10-07 09:52:10.211831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.248 [2024-10-07 09:52:10.224358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.248 [2024-10-07 09:52:10.224385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.248 [2024-10-07 09:52:10.235366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.248 [2024-10-07 09:52:10.235391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.246250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.246276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.257613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.257640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:21.507 [2024-10-07 09:52:10.268682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.268710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.282237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.282263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.292366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.292391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.304492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.304517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.320295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.320333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.335036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.335061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.344616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.344642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.357017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.357056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.371689] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.371716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.381371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.381397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.393353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.393378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.406270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.406295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.415986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.416011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.430969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.430994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.440713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.440739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.455962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.455988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.466692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.466719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.477682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.477709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.507 [2024-10-07 09:52:10.489055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.507 [2024-10-07 09:52:10.489080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.505081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.505107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.516109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.516134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.528535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.528560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.539800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.539837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.550820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.550846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.561771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 
[2024-10-07 09:52:10.561795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.573051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.573075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.587930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.587969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.597627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.597677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.609875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.609901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.621752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.621792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.635451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.635492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.645081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.645106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.657378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.657404] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.668923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.668964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.680564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.680588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.691756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.691782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.702661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.702717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.712938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.712978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.725131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.725157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.739229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.739256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.766 [2024-10-07 09:52:10.749364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.749389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:21.766 [2024-10-07 09:52:10.761179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.766 [2024-10-07 09:52:10.761215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.775634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.775686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.785294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.785320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.797222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.797248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.811125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.811150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.820281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.820307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.834923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.834950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.845126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.845153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.857358] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.857383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.868631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.868655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.879690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.879715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.890787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.890813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.902109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.902133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.913290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.913329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.928053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.928078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.937743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.937768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.950130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.950155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.961402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.961427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.976360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.976386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:10.990304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:10.990340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:11.000023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:11.000048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.025 [2024-10-07 09:52:11.012493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.025 [2024-10-07 09:52:11.012518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.028219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.028244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.043100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.043140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.052765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 
[2024-10-07 09:52:11.052791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.065052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.065076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.079627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.079678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.089272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.089298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.101970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.101996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.113135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.113161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 11380.40 IOPS, 88.91 MiB/s [2024-10-07 09:52:11.126478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.126504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.134089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.134111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285
00:31:22.285 Latency(us)
00:31:22.285 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:22.285 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:22.285 Nvme1n1 : 5.01 11381.91 88.92 0.00 0.00 11231.68 2852.03 21748.24
00:31:22.285 ===================================================================================================================
00:31:22.285 Total : 11381.91 88.92 0.00 0.00 11231.68 2852.03 21748.24
00:31:22.285 [2024-10-07 09:52:11.142085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.142108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.150086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.150108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.158103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.158138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.166133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.166184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.174134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.174183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.182129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.182175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:22.285 [2024-10-07 09:52:11.190141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:22.285 [2024-10-07 09:52:11.190191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.198137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.198188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.206140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.206183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.214137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.214183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.222132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.222182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.230135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.230183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.238133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.238183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.246132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.246181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.254131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.254175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 
[2024-10-07 09:52:11.262128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.262176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.270127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.270176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.285 [2024-10-07 09:52:11.278112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.285 [2024-10-07 09:52:11.278147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.286082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.286102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.294084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.294104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.302081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.302102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.310078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.310101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.318135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.318181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.326132] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.326174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.334084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.334105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.342083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.342103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.350082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.350101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.358082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.358101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.366104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.366139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.374131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.374173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.382118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.382155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.394099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.394121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 [2024-10-07 09:52:11.402102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.544 [2024-10-07 09:52:11.402122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (365942) - No such process 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 365942 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.544 delay0 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.544 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:22.803 [2024-10-07 09:52:11.558766] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:30.916 Initializing NVMe Controllers 00:31:30.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:30.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:30.916 Initialization complete. Launching workers. 
00:31:30.916 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 18106 00:31:30.916 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18226, failed to submit 117 00:31:30.916 success 18118, unsuccessful 108, failed 0 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.916 rmmod nvme_tcp 00:31:30.916 rmmod nvme_fabrics 00:31:30.916 rmmod nvme_keyring 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 364726 ']' 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 364726 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@950 -- # '[' -z 364726 ']' 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 364726 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 364726 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 364726' 00:31:30.916 killing process with pid 364726 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 364726 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 364726 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:30.916 
09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.916 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.917 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.297 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:32.297 00:31:32.297 real 0m28.852s 00:31:32.297 user 0m40.005s 00:31:32.297 sys 0m10.489s 00:31:32.297 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:32.297 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:32.297 ************************************ 00:31:32.297 END TEST nvmf_zcopy 00:31:32.297 ************************************ 00:31:32.297 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:32.297 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:32.298 
************************************ 00:31:32.298 START TEST nvmf_nmic 00:31:32.298 ************************************ 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:32.298 * Looking for test storage... 00:31:32.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.298 09:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.298 09:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:32.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.298 --rc genhtml_branch_coverage=1 00:31:32.298 --rc genhtml_function_coverage=1 00:31:32.298 --rc genhtml_legend=1 00:31:32.298 --rc geninfo_all_blocks=1 00:31:32.298 --rc geninfo_unexecuted_blocks=1 00:31:32.298 00:31:32.298 ' 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:32.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.298 --rc genhtml_branch_coverage=1 00:31:32.298 --rc genhtml_function_coverage=1 00:31:32.298 --rc genhtml_legend=1 00:31:32.298 --rc geninfo_all_blocks=1 00:31:32.298 --rc geninfo_unexecuted_blocks=1 00:31:32.298 00:31:32.298 ' 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:32.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.298 --rc genhtml_branch_coverage=1 00:31:32.298 --rc genhtml_function_coverage=1 00:31:32.298 --rc genhtml_legend=1 00:31:32.298 --rc geninfo_all_blocks=1 00:31:32.298 --rc geninfo_unexecuted_blocks=1 00:31:32.298 00:31:32.298 ' 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:32.298 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.298 --rc genhtml_branch_coverage=1 00:31:32.298 --rc genhtml_function_coverage=1 00:31:32.298 --rc genhtml_legend=1 00:31:32.298 --rc geninfo_all_blocks=1 00:31:32.298 --rc geninfo_unexecuted_blocks=1 00:31:32.298 00:31:32.298 ' 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:31:32.298 09:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.298 09:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.298 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:32.299 09:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:34.836 09:52:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:34.836 09:52:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:31:34.836 Found 0000:09:00.0 (0x8086 - 0x1592) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:31:34.836 Found 0000:09:00.1 (0x8086 - 0x1592) 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:31:34.836 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:31:34.837 09:52:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:34.837 Found net devices under 0000:09:00.0: cvl_0_0 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.837 09:52:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:34.837 Found net devices under 0000:09:00.1: cvl_0_1 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:34.837 09:52:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:34.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:31:34.837 00:31:34.837 --- 10.0.0.2 ping statistics --- 00:31:34.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.837 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:31:34.837 00:31:34.837 --- 10.0.0.1 ping statistics --- 00:31:34.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.837 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:34.837 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=369243 
00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 369243 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 369243 ']' 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:34.838 [2024-10-07 09:52:23.511293] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:34.838 [2024-10-07 09:52:23.512344] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:31:34.838 [2024-10-07 09:52:23.512398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.838 [2024-10-07 09:52:23.579699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:34.838 [2024-10-07 09:52:23.686830] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.838 [2024-10-07 09:52:23.686892] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.838 [2024-10-07 09:52:23.686920] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.838 [2024-10-07 09:52:23.686932] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.838 [2024-10-07 09:52:23.686941] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.838 [2024-10-07 09:52:23.688493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.838 [2024-10-07 09:52:23.688560] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:34.838 [2024-10-07 09:52:23.688581] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:34.838 [2024-10-07 09:52:23.688584] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.838 [2024-10-07 09:52:23.783743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:34.838 [2024-10-07 09:52:23.783994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:34.838 [2024-10-07 09:52:23.784282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:34.838 [2024-10-07 09:52:23.784841] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:34.838 [2024-10-07 09:52:23.785094] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.838 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 [2024-10-07 09:52:23.833331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 Malloc0 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 [2024-10-07 09:52:23.889471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:35.097 test case1: single bdev can't be used in multiple subsystems 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 [2024-10-07 09:52:23.913241] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:31:35.097 [2024-10-07 09:52:23.913269] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:35.097 [2024-10-07 09:52:23.913298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.097 request: 00:31:35.097 { 00:31:35.097 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:35.097 "namespace": { 00:31:35.097 "bdev_name": "Malloc0", 00:31:35.097 "no_auto_visible": false 00:31:35.097 }, 00:31:35.097 "method": "nvmf_subsystem_add_ns", 00:31:35.097 "req_id": 1 00:31:35.097 } 00:31:35.097 Got JSON-RPC error response 00:31:35.097 response: 00:31:35.097 { 00:31:35.097 "code": -32602, 00:31:35.097 "message": "Invalid parameters" 00:31:35.097 } 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:35.097 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:35.098 Adding namespace failed - expected result. 
00:31:35.098 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:35.098 test case2: host connect to nvmf target in multiple paths 00:31:35.098 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:35.098 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.098 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.098 [2024-10-07 09:52:23.925329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:35.098 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.098 09:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:35.356 09:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:35.614 09:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:35.614 09:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:31:35.614 09:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:35.614 09:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:35.614 09:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:31:37.513 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:37.513 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:37.513 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:31:37.513 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:37.513 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:37.513 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:31:37.513 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:37.513 [global] 00:31:37.513 thread=1 00:31:37.513 invalidate=1 00:31:37.513 rw=write 00:31:37.513 time_based=1 00:31:37.513 runtime=1 00:31:37.513 ioengine=libaio 00:31:37.513 direct=1 00:31:37.513 bs=4096 00:31:37.513 iodepth=1 00:31:37.513 norandommap=0 00:31:37.513 numjobs=1 00:31:37.513 00:31:37.513 verify_dump=1 00:31:37.513 verify_backlog=512 00:31:37.513 verify_state_save=0 00:31:37.513 do_verify=1 00:31:37.513 verify=crc32c-intel 00:31:37.513 [job0] 00:31:37.513 filename=/dev/nvme0n1 00:31:37.513 Could not set queue depth (nvme0n1) 00:31:37.771 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.771 fio-3.35 00:31:37.771 Starting 1 thread 00:31:39.144 00:31:39.144 job0: (groupid=0, jobs=1): err= 0: pid=369723: Mon Oct 7 09:52:27 
2024 00:31:39.144 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:31:39.144 slat (nsec): min=14077, max=34504, avg=18529.32, stdev=7247.11 00:31:39.144 clat (usec): min=40869, max=41981, avg=41031.58, stdev=223.40 00:31:39.144 lat (usec): min=40901, max=41998, avg=41050.11, stdev=223.11 00:31:39.144 clat percentiles (usec): 00:31:39.144 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:39.144 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:39.144 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:39.144 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:39.144 | 99.99th=[42206] 00:31:39.144 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:31:39.144 slat (nsec): min=8616, max=64582, avg=20001.74, stdev=8773.01 00:31:39.144 clat (usec): min=144, max=386, avg=197.11, stdev=41.12 00:31:39.144 lat (usec): min=153, max=406, avg=217.11, stdev=44.19 00:31:39.144 clat percentiles (usec): 00:31:39.144 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 174], 00:31:39.144 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:31:39.144 | 70.00th=[ 196], 80.00th=[ 219], 90.00th=[ 251], 95.00th=[ 289], 00:31:39.144 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 388], 00:31:39.144 | 99.99th=[ 388] 00:31:39.144 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:39.144 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:39.144 lat (usec) : 250=85.96%, 500=9.93% 00:31:39.144 lat (msec) : 50=4.12% 00:31:39.144 cpu : usr=0.98%, sys=0.98%, ctx=534, majf=0, minf=1 00:31:39.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.144 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:39.144 00:31:39.144 Run status group 0 (all jobs): 00:31:39.144 READ: bw=86.5KiB/s (88.6kB/s), 86.5KiB/s-86.5KiB/s (88.6kB/s-88.6kB/s), io=88.0KiB (90.1kB), run=1017-1017msec 00:31:39.144 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:31:39.144 00:31:39.144 Disk stats (read/write): 00:31:39.144 nvme0n1: ios=69/512, merge=0/0, ticks=806/64, in_queue=870, util=91.58% 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:39.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:39.144 09:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:39.144 rmmod nvme_tcp 00:31:39.144 rmmod nvme_fabrics 00:31:39.144 rmmod nvme_keyring 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 369243 ']' 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 369243 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 369243 ']' 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 369243 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 369243 00:31:39.144 
09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 369243' 00:31:39.144 killing process with pid 369243 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 369243 00:31:39.144 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 369243 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.403 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.307 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:41.307 00:31:41.307 real 0m9.170s 00:31:41.307 user 0m16.946s 00:31:41.307 sys 0m3.341s 00:31:41.307 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:41.307 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:41.307 ************************************ 00:31:41.307 END TEST nvmf_nmic 00:31:41.307 ************************************ 00:31:41.307 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:41.307 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:41.307 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.307 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:41.566 ************************************ 00:31:41.566 START TEST nvmf_fio_target 00:31:41.566 ************************************ 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:41.566 * Looking for test storage... 
00:31:41.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.566 
09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:41.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.566 --rc genhtml_branch_coverage=1 00:31:41.566 --rc genhtml_function_coverage=1 00:31:41.566 --rc genhtml_legend=1 00:31:41.566 --rc geninfo_all_blocks=1 00:31:41.566 --rc geninfo_unexecuted_blocks=1 00:31:41.566 00:31:41.566 ' 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:41.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.566 --rc genhtml_branch_coverage=1 00:31:41.566 --rc genhtml_function_coverage=1 00:31:41.566 --rc genhtml_legend=1 00:31:41.566 --rc geninfo_all_blocks=1 00:31:41.566 --rc geninfo_unexecuted_blocks=1 00:31:41.566 00:31:41.566 ' 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:41.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.566 --rc genhtml_branch_coverage=1 00:31:41.566 --rc genhtml_function_coverage=1 00:31:41.566 --rc genhtml_legend=1 00:31:41.566 --rc geninfo_all_blocks=1 00:31:41.566 --rc geninfo_unexecuted_blocks=1 00:31:41.566 00:31:41.566 ' 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:41.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.566 --rc genhtml_branch_coverage=1 00:31:41.566 --rc genhtml_function_coverage=1 00:31:41.566 --rc genhtml_legend=1 00:31:41.566 --rc geninfo_all_blocks=1 
00:31:41.566 --rc geninfo_unexecuted_blocks=1 00:31:41.566 00:31:41.566 ' 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.566 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:31:41.567 
09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.567 09:52:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.567 
09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:41.567 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.567 09:52:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.101 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.101 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:44.101 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:44.101 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:44.102 09:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:31:44.102 Found 0000:09:00.0 (0x8086 - 0x1592) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:31:44.102 Found 0000:09:00.1 (0x8086 - 0x1592) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.102 
09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:44.102 Found net 
devices under 0000:09:00.0: cvl_0_0 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:44.102 Found net devices under 0000:09:00.1: cvl_0_1 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:44.102 09:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:44.102 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:44.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:31:44.102 00:31:44.102 --- 10.0.0.2 ping statistics --- 00:31:44.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.103 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:31:44.103 00:31:44.103 --- 10.0.0.1 ping statistics --- 00:31:44.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.103 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.103 09:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=371697 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 371697 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 371697 ']' 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.103 [2024-10-07 09:52:32.703753] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:44.103 [2024-10-07 09:52:32.704831] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:31:44.103 [2024-10-07 09:52:32.704882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.103 [2024-10-07 09:52:32.766378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:44.103 [2024-10-07 09:52:32.874826] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.103 [2024-10-07 09:52:32.874881] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.103 [2024-10-07 09:52:32.874902] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.103 [2024-10-07 09:52:32.874914] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.103 [2024-10-07 09:52:32.874923] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.103 [2024-10-07 09:52:32.876398] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.103 [2024-10-07 09:52:32.876455] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.103 [2024-10-07 09:52:32.876524] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:44.103 [2024-10-07 09:52:32.876527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.103 [2024-10-07 09:52:32.974392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:44.103 [2024-10-07 09:52:32.974614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:44.103 [2024-10-07 09:52:32.974985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:44.103 [2024-10-07 09:52:32.975592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:44.103 [2024-10-07 09:52:32.975846] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.103 09:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.103 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.103 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:44.361 [2024-10-07 09:52:33.281216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.361 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:44.620 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:44.620 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:31:45.186 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:45.186 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:45.445 09:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:45.445 09:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:45.703 09:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:45.703 09:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:45.962 09:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:46.220 09:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:46.220 09:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:46.480 09:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:46.480 09:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:46.738 09:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:46.738 09:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:47.306 09:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:47.564 09:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:47.564 09:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:47.822 09:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:47.822 09:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:48.080 09:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.338 [2024-10-07 09:52:37.201370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.338 09:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:48.596 09:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:48.854 09:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:49.112 09:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:49.112 09:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:31:49.112 09:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:49.112 09:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:31:49.112 09:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:31:49.112 09:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:31:51.640 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:51.640 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:51.640 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:31:51.640 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:31:51.640 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:51.641 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:31:51.641 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:51.641 [global] 00:31:51.641 thread=1 00:31:51.641 invalidate=1 00:31:51.641 rw=write 00:31:51.641 time_based=1 00:31:51.641 runtime=1 00:31:51.641 ioengine=libaio 00:31:51.641 direct=1 00:31:51.641 bs=4096 00:31:51.641 iodepth=1 00:31:51.641 norandommap=0 00:31:51.641 numjobs=1 00:31:51.641 00:31:51.641 verify_dump=1 00:31:51.641 verify_backlog=512 00:31:51.641 verify_state_save=0 00:31:51.641 do_verify=1 00:31:51.641 verify=crc32c-intel 00:31:51.641 [job0] 00:31:51.641 filename=/dev/nvme0n1 00:31:51.641 [job1] 00:31:51.641 filename=/dev/nvme0n2 00:31:51.641 [job2] 00:31:51.641 filename=/dev/nvme0n3 00:31:51.641 [job3] 00:31:51.641 filename=/dev/nvme0n4 00:31:51.641 Could not set queue depth (nvme0n1) 00:31:51.641 Could not set queue depth (nvme0n2) 00:31:51.641 Could not set queue depth (nvme0n3) 00:31:51.641 Could not set queue depth (nvme0n4) 00:31:51.641 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.641 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.641 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.641 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.641 fio-3.35 00:31:51.641 Starting 4 threads 00:31:52.576 00:31:52.576 job0: (groupid=0, jobs=1): err= 0: pid=372716: Mon Oct 7 09:52:41 2024 00:31:52.576 read: IOPS=1251, BW=5007KiB/s (5127kB/s)(5012KiB/1001msec) 00:31:52.576 slat (nsec): min=7258, max=54278, avg=17800.10, stdev=3869.44 00:31:52.576 clat (usec): min=234, max=40946, avg=503.62, stdev=2273.94 00:31:52.576 lat (usec): min=244, 
max=40960, avg=521.42, stdev=2273.89 00:31:52.576 clat percentiles (usec): 00:31:52.576 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 289], 20.00th=[ 306], 00:31:52.576 | 30.00th=[ 322], 40.00th=[ 355], 50.00th=[ 396], 60.00th=[ 408], 00:31:52.576 | 70.00th=[ 416], 80.00th=[ 429], 90.00th=[ 449], 95.00th=[ 465], 00:31:52.576 | 99.00th=[ 603], 99.50th=[ 676], 99.90th=[40633], 99.95th=[41157], 00:31:52.576 | 99.99th=[41157] 00:31:52.576 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:31:52.576 slat (nsec): min=6301, max=63881, avg=21041.66, stdev=6198.32 00:31:52.576 clat (usec): min=142, max=500, avg=194.44, stdev=43.32 00:31:52.576 lat (usec): min=159, max=541, avg=215.48, stdev=43.96 00:31:52.576 clat percentiles (usec): 00:31:52.576 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:31:52.576 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:31:52.576 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 247], 95.00th=[ 285], 00:31:52.576 | 99.00th=[ 396], 99.50th=[ 445], 99.90th=[ 494], 99.95th=[ 502], 00:31:52.576 | 99.99th=[ 502] 00:31:52.576 bw ( KiB/s): min= 4416, max= 4416, per=23.01%, avg=4416.00, stdev= 0.00, samples=1 00:31:52.576 iops : min= 1104, max= 1104, avg=1104.00, stdev= 0.00, samples=1 00:31:52.576 lat (usec) : 250=49.84%, 500=49.09%, 750=0.93% 00:31:52.576 lat (msec) : 50=0.14% 00:31:52.576 cpu : usr=4.00%, sys=7.10%, ctx=2790, majf=0, minf=1 00:31:52.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.576 issued rwts: total=1253,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:52.576 job1: (groupid=0, jobs=1): err= 0: pid=372717: Mon Oct 7 09:52:41 2024 00:31:52.576 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 
00:31:52.576 slat (nsec): min=13375, max=33956, avg=28154.59, stdev=7977.71 00:31:52.576 clat (usec): min=282, max=41970, avg=39179.27, stdev=8690.77 00:31:52.576 lat (usec): min=304, max=41986, avg=39207.43, stdev=8692.08 00:31:52.576 clat percentiles (usec): 00:31:52.576 | 1.00th=[ 281], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:52.576 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:52.576 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:52.576 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:52.576 | 99.99th=[42206] 00:31:52.576 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:31:52.576 slat (nsec): min=6358, max=67493, avg=17531.01, stdev=8817.61 00:31:52.576 clat (usec): min=167, max=1447, avg=256.65, stdev=93.27 00:31:52.576 lat (usec): min=175, max=1456, avg=274.18, stdev=93.85 00:31:52.576 clat percentiles (usec): 00:31:52.576 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:31:52.576 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 239], 00:31:52.576 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 338], 95.00th=[ 388], 00:31:52.576 | 99.00th=[ 478], 99.50th=[ 963], 99.90th=[ 1450], 99.95th=[ 1450], 00:31:52.576 | 99.99th=[ 1450] 00:31:52.576 bw ( KiB/s): min= 4096, max= 4096, per=21.35%, avg=4096.00, stdev= 0.00, samples=1 00:31:52.576 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:52.576 lat (usec) : 250=68.16%, 500=27.15%, 1000=0.56% 00:31:52.576 lat (msec) : 2=0.19%, 50=3.93% 00:31:52.576 cpu : usr=0.20%, sys=1.39%, ctx=534, majf=0, minf=1 00:31:52.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.576 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.576 
latency : target=0, window=0, percentile=100.00%, depth=1 00:31:52.576 job2: (groupid=0, jobs=1): err= 0: pid=372718: Mon Oct 7 09:52:41 2024 00:31:52.576 read: IOPS=587, BW=2352KiB/s (2408kB/s)(2420KiB/1029msec) 00:31:52.576 slat (nsec): min=5983, max=35571, avg=12700.68, stdev=6818.56 00:31:52.576 clat (usec): min=222, max=41980, avg=1292.05, stdev=6337.51 00:31:52.576 lat (usec): min=229, max=42014, avg=1304.75, stdev=6340.44 00:31:52.576 clat percentiles (usec): 00:31:52.576 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 249], 00:31:52.576 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:31:52.576 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 453], 00:31:52.576 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:52.576 | 99.99th=[42206] 00:31:52.576 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:31:52.576 slat (nsec): min=8453, max=56679, avg=20416.10, stdev=6992.22 00:31:52.576 clat (usec): min=162, max=311, avg=205.52, stdev=15.24 00:31:52.576 lat (usec): min=172, max=333, avg=225.94, stdev=17.61 00:31:52.576 clat percentiles (usec): 00:31:52.576 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 194], 00:31:52.576 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:31:52.576 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 233], 00:31:52.576 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 273], 99.95th=[ 314], 00:31:52.576 | 99.99th=[ 314] 00:31:52.576 bw ( KiB/s): min= 8192, max= 8192, per=42.69%, avg=8192.00, stdev= 0.00, samples=1 00:31:52.576 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:52.576 lat (usec) : 250=70.29%, 500=28.48%, 750=0.25% 00:31:52.576 lat (msec) : 10=0.06%, 50=0.92% 00:31:52.576 cpu : usr=2.33%, sys=3.11%, ctx=1630, majf=0, minf=1 00:31:52.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:52.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.576 issued rwts: total=605,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:52.576 job3: (groupid=0, jobs=1): err= 0: pid=372719: Mon Oct 7 09:52:41 2024 00:31:52.576 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:31:52.576 slat (nsec): min=6239, max=45592, avg=14962.44, stdev=5302.45 00:31:52.576 clat (usec): min=210, max=634, avg=320.10, stdev=66.74 00:31:52.576 lat (usec): min=216, max=654, avg=335.06, stdev=70.49 00:31:52.576 clat percentiles (usec): 00:31:52.576 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 235], 20.00th=[ 258], 00:31:52.576 | 30.00th=[ 269], 40.00th=[ 289], 50.00th=[ 314], 60.00th=[ 343], 00:31:52.576 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 416], 00:31:52.576 | 99.00th=[ 445], 99.50th=[ 529], 99.90th=[ 635], 99.95th=[ 635], 00:31:52.576 | 99.99th=[ 635] 00:31:52.576 write: IOPS=1862, BW=7449KiB/s (7627kB/s)(7456KiB/1001msec); 0 zone resets 00:31:52.576 slat (usec): min=7, max=20913, avg=34.29, stdev=484.70 00:31:52.576 clat (usec): min=160, max=534, avg=216.82, stdev=48.75 00:31:52.576 lat (usec): min=169, max=21439, avg=251.12, stdev=494.37 00:31:52.576 clat percentiles (usec): 00:31:52.576 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 186], 00:31:52.576 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:31:52.577 | 70.00th=[ 215], 80.00th=[ 241], 90.00th=[ 281], 95.00th=[ 318], 00:31:52.577 | 99.00th=[ 412], 99.50th=[ 461], 99.90th=[ 529], 99.95th=[ 537], 00:31:52.577 | 99.99th=[ 537] 00:31:52.577 bw ( KiB/s): min= 8192, max= 8192, per=42.69%, avg=8192.00, stdev= 0.00, samples=1 00:31:52.577 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:52.577 lat (usec) : 250=52.53%, 500=47.06%, 750=0.41% 00:31:52.577 cpu : usr=4.50%, sys=8.80%, ctx=3403, majf=0, minf=1 00:31:52.577 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.577 issued rwts: total=1536,1864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:52.577 00:31:52.577 Run status group 0 (all jobs): 00:31:52.577 READ: bw=13.0MiB/s (13.6MB/s), 87.5KiB/s-6138KiB/s (89.6kB/s-6285kB/s), io=13.3MiB (14.0MB), run=1001-1029msec 00:31:52.577 WRITE: bw=18.7MiB/s (19.6MB/s), 2036KiB/s-7449KiB/s (2085kB/s-7627kB/s), io=19.3MiB (20.2MB), run=1001-1029msec 00:31:52.577 00:31:52.577 Disk stats (read/write): 00:31:52.577 nvme0n1: ios=1073/1286, merge=0/0, ticks=842/231, in_queue=1073, util=85.57% 00:31:52.577 nvme0n2: ios=68/512, merge=0/0, ticks=767/121, in_queue=888, util=90.95% 00:31:52.577 nvme0n3: ios=626/1024, merge=0/0, ticks=1463/194, in_queue=1657, util=93.42% 00:31:52.577 nvme0n4: ios=1404/1536, merge=0/0, ticks=540/319, in_queue=859, util=96.31% 00:31:52.577 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:52.577 [global] 00:31:52.577 thread=1 00:31:52.577 invalidate=1 00:31:52.577 rw=randwrite 00:31:52.577 time_based=1 00:31:52.577 runtime=1 00:31:52.577 ioengine=libaio 00:31:52.577 direct=1 00:31:52.577 bs=4096 00:31:52.577 iodepth=1 00:31:52.577 norandommap=0 00:31:52.577 numjobs=1 00:31:52.577 00:31:52.577 verify_dump=1 00:31:52.577 verify_backlog=512 00:31:52.577 verify_state_save=0 00:31:52.577 do_verify=1 00:31:52.577 verify=crc32c-intel 00:31:52.577 [job0] 00:31:52.577 filename=/dev/nvme0n1 00:31:52.577 [job1] 00:31:52.577 filename=/dev/nvme0n2 00:31:52.577 [job2] 00:31:52.577 filename=/dev/nvme0n3 00:31:52.577 [job3] 00:31:52.577 filename=/dev/nvme0n4 00:31:52.577 
Could not set queue depth (nvme0n1) 00:31:52.577 Could not set queue depth (nvme0n2) 00:31:52.577 Could not set queue depth (nvme0n3) 00:31:52.577 Could not set queue depth (nvme0n4) 00:31:52.834 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:52.834 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:52.835 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:52.835 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:52.835 fio-3.35 00:31:52.835 Starting 4 threads 00:31:54.209 00:31:54.209 job0: (groupid=0, jobs=1): err= 0: pid=372961: Mon Oct 7 09:52:42 2024 00:31:54.210 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:54.210 slat (nsec): min=5332, max=35343, avg=8381.32, stdev=4463.18 00:31:54.210 clat (usec): min=204, max=1148, avg=253.52, stdev=38.80 00:31:54.210 lat (usec): min=210, max=1154, avg=261.91, stdev=40.11 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:31:54.210 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 265], 00:31:54.210 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:31:54.210 | 99.00th=[ 318], 99.50th=[ 359], 99.90th=[ 408], 99.95th=[ 996], 00:31:54.210 | 99.99th=[ 1156] 00:31:54.210 write: IOPS=2257, BW=9031KiB/s (9248kB/s)(9040KiB/1001msec); 0 zone resets 00:31:54.210 slat (nsec): min=6934, max=70225, avg=9974.77, stdev=4801.22 00:31:54.210 clat (usec): min=142, max=535, avg=188.25, stdev=42.32 00:31:54.210 lat (usec): min=150, max=547, avg=198.22, stdev=43.07 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:31:54.210 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 184], 00:31:54.210 | 70.00th=[ 
196], 80.00th=[ 219], 90.00th=[ 245], 95.00th=[ 249], 00:31:54.210 | 99.00th=[ 371], 99.50th=[ 412], 99.90th=[ 469], 99.95th=[ 478], 00:31:54.210 | 99.99th=[ 537] 00:31:54.210 bw ( KiB/s): min= 8192, max= 8192, per=56.06%, avg=8192.00, stdev= 0.00, samples=1 00:31:54.210 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:54.210 lat (usec) : 250=75.44%, 500=24.49%, 750=0.02%, 1000=0.02% 00:31:54.210 lat (msec) : 2=0.02% 00:31:54.210 cpu : usr=2.30%, sys=6.10%, ctx=4309, majf=0, minf=2 00:31:54.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 issued rwts: total=2048,2260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:54.210 job1: (groupid=0, jobs=1): err= 0: pid=372982: Mon Oct 7 09:52:42 2024 00:31:54.210 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:31:54.210 slat (nsec): min=8362, max=32798, avg=14896.45, stdev=6568.15 00:31:54.210 clat (usec): min=40959, max=41184, avg=40991.21, stdev=45.16 00:31:54.210 lat (usec): min=40980, max=41192, avg=41006.11, stdev=43.08 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:54.210 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:54.210 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:54.210 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:54.210 | 99.99th=[41157] 00:31:54.210 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:31:54.210 slat (nsec): min=8536, max=49430, avg=11610.03, stdev=3866.44 00:31:54.210 clat (usec): min=150, max=418, avg=244.95, stdev=44.27 00:31:54.210 lat (usec): min=168, max=430, avg=256.56, stdev=44.56 00:31:54.210 
clat percentiles (usec): 00:31:54.210 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 204], 20.00th=[ 233], 00:31:54.210 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:31:54.210 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 367], 00:31:54.210 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 420], 99.95th=[ 420], 00:31:54.210 | 99.99th=[ 420] 00:31:54.210 bw ( KiB/s): min= 4096, max= 4096, per=28.03%, avg=4096.00, stdev= 0.00, samples=1 00:31:54.210 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:54.210 lat (usec) : 250=78.28%, 500=17.60% 00:31:54.210 lat (msec) : 50=4.12% 00:31:54.210 cpu : usr=0.77%, sys=0.29%, ctx=537, majf=0, minf=1 00:31:54.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:54.210 job2: (groupid=0, jobs=1): err= 0: pid=373014: Mon Oct 7 09:52:42 2024 00:31:54.210 read: IOPS=158, BW=635KiB/s (651kB/s)(636KiB/1001msec) 00:31:54.210 slat (nsec): min=5740, max=35591, avg=8687.01, stdev=4294.01 00:31:54.210 clat (usec): min=202, max=41056, avg=5603.35, stdev=13837.83 00:31:54.210 lat (usec): min=209, max=41074, avg=5612.04, stdev=13840.79 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 208], 20.00th=[ 212], 00:31:54.210 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:31:54.210 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[41157], 95.00th=[41157], 00:31:54.210 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:54.210 | 99.99th=[41157] 00:31:54.210 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:31:54.210 slat (nsec): min=6319, max=37018, 
avg=7976.09, stdev=2881.32 00:31:54.210 clat (usec): min=158, max=450, avg=192.38, stdev=37.01 00:31:54.210 lat (usec): min=166, max=486, avg=200.36, stdev=38.03 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 172], 00:31:54.210 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:31:54.210 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 247], 95.00th=[ 262], 00:31:54.210 | 99.00th=[ 334], 99.50th=[ 383], 99.90th=[ 453], 99.95th=[ 453], 00:31:54.210 | 99.99th=[ 453] 00:31:54.210 bw ( KiB/s): min= 4096, max= 4096, per=28.03%, avg=4096.00, stdev= 0.00, samples=1 00:31:54.210 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:54.210 lat (usec) : 250=89.87%, 500=6.86%, 1000=0.15% 00:31:54.210 lat (msec) : 50=3.13% 00:31:54.210 cpu : usr=0.20%, sys=0.60%, ctx=672, majf=0, minf=2 00:31:54.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 issued rwts: total=159,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:54.210 job3: (groupid=0, jobs=1): err= 0: pid=373027: Mon Oct 7 09:52:42 2024 00:31:54.210 read: IOPS=189, BW=758KiB/s (776kB/s)(760KiB/1003msec) 00:31:54.210 slat (nsec): min=4476, max=35463, avg=7330.98, stdev=4991.79 00:31:54.210 clat (usec): min=206, max=41109, avg=4735.96, stdev=12794.12 00:31:54.210 lat (usec): min=211, max=41114, avg=4743.29, stdev=12796.76 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:31:54.210 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:31:54.210 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[40633], 95.00th=[41157], 00:31:54.210 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:31:54.210 | 99.99th=[41157] 00:31:54.210 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:31:54.210 slat (nsec): min=6168, max=28575, avg=7397.39, stdev=2064.38 00:31:54.210 clat (usec): min=159, max=447, avg=181.39, stdev=17.29 00:31:54.210 lat (usec): min=166, max=473, avg=188.79, stdev=17.95 00:31:54.210 clat percentiles (usec): 00:31:54.210 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 172], 00:31:54.210 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:31:54.210 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 206], 00:31:54.210 | 99.00th=[ 221], 99.50th=[ 249], 99.90th=[ 449], 99.95th=[ 449], 00:31:54.210 | 99.99th=[ 449] 00:31:54.210 bw ( KiB/s): min= 4096, max= 4096, per=28.03%, avg=4096.00, stdev= 0.00, samples=1 00:31:54.210 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:54.210 lat (usec) : 250=93.45%, 500=3.56% 00:31:54.210 lat (msec) : 50=2.99% 00:31:54.210 cpu : usr=0.20%, sys=0.50%, ctx=703, majf=0, minf=1 00:31:54.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.210 issued rwts: total=190,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:54.210 00:31:54.210 Run status group 0 (all jobs): 00:31:54.210 READ: bw=9313KiB/s (9536kB/s), 84.7KiB/s-8184KiB/s (86.7kB/s-8380kB/s), io=9676KiB (9908kB), run=1001-1039msec 00:31:54.210 WRITE: bw=14.3MiB/s (15.0MB/s), 1971KiB/s-9031KiB/s (2018kB/s-9248kB/s), io=14.8MiB (15.5MB), run=1001-1039msec 00:31:54.210 00:31:54.210 Disk stats (read/write): 00:31:54.210 nvme0n1: ios=1669/2048, merge=0/0, ticks=1305/362, in_queue=1667, util=88.88% 00:31:54.210 nvme0n2: ios=41/512, merge=0/0, ticks=1686/124, in_queue=1810, util=97.36% 
00:31:54.210 nvme0n3: ios=74/512, merge=0/0, ticks=977/90, in_queue=1067, util=92.66% 00:31:54.210 nvme0n4: ios=244/512, merge=0/0, ticks=1010/90, in_queue=1100, util=96.82% 00:31:54.210 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:54.210 [global] 00:31:54.210 thread=1 00:31:54.210 invalidate=1 00:31:54.210 rw=write 00:31:54.210 time_based=1 00:31:54.210 runtime=1 00:31:54.210 ioengine=libaio 00:31:54.210 direct=1 00:31:54.210 bs=4096 00:31:54.210 iodepth=128 00:31:54.210 norandommap=0 00:31:54.210 numjobs=1 00:31:54.210 00:31:54.210 verify_dump=1 00:31:54.210 verify_backlog=512 00:31:54.210 verify_state_save=0 00:31:54.210 do_verify=1 00:31:54.210 verify=crc32c-intel 00:31:54.210 [job0] 00:31:54.210 filename=/dev/nvme0n1 00:31:54.210 [job1] 00:31:54.210 filename=/dev/nvme0n2 00:31:54.210 [job2] 00:31:54.210 filename=/dev/nvme0n3 00:31:54.210 [job3] 00:31:54.210 filename=/dev/nvme0n4 00:31:54.210 Could not set queue depth (nvme0n1) 00:31:54.210 Could not set queue depth (nvme0n2) 00:31:54.210 Could not set queue depth (nvme0n3) 00:31:54.210 Could not set queue depth (nvme0n4) 00:31:54.507 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:54.507 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:54.507 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:54.507 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:54.507 fio-3.35 00:31:54.507 Starting 4 threads 00:31:55.442 00:31:55.442 job0: (groupid=0, jobs=1): err= 0: pid=373279: Mon Oct 7 09:52:44 2024 00:31:55.442 read: IOPS=5612, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:31:55.442 slat (usec): min=3, max=10707, 
avg=93.33, stdev=669.49 00:31:55.442 clat (usec): min=1619, max=23176, avg=11742.92, stdev=2685.18 00:31:55.442 lat (usec): min=3242, max=23187, avg=11836.25, stdev=2720.48 00:31:55.442 clat percentiles (usec): 00:31:55.442 | 1.00th=[ 5145], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10159], 00:31:55.442 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11469], 60.00th=[11731], 00:31:55.442 | 70.00th=[11994], 80.00th=[13042], 90.00th=[15664], 95.00th=[17171], 00:31:55.442 | 99.00th=[19792], 99.50th=[20841], 99.90th=[22676], 99.95th=[22676], 00:31:55.442 | 99.99th=[23200] 00:31:55.442 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:31:55.442 slat (usec): min=3, max=9643, avg=76.03, stdev=532.58 00:31:55.442 clat (usec): min=2580, max=22595, avg=10848.45, stdev=2434.23 00:31:55.442 lat (usec): min=2587, max=22602, avg=10924.47, stdev=2469.11 00:31:55.442 clat percentiles (usec): 00:31:55.442 | 1.00th=[ 3949], 5.00th=[ 6783], 10.00th=[ 7635], 20.00th=[ 8979], 00:31:55.442 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:31:55.442 | 70.00th=[11731], 80.00th=[12256], 90.00th=[14091], 95.00th=[15139], 00:31:55.442 | 99.00th=[16319], 99.50th=[16909], 99.90th=[20841], 99.95th=[21627], 00:31:55.442 | 99.99th=[22676] 00:31:55.442 bw ( KiB/s): min=20776, max=24280, per=33.15%, avg=22528.00, stdev=2477.70, samples=2 00:31:55.442 iops : min= 5194, max= 6070, avg=5632.00, stdev=619.43, samples=2 00:31:55.442 lat (msec) : 2=0.01%, 4=0.76%, 10=23.39%, 20=75.27%, 50=0.57% 00:31:55.442 cpu : usr=5.49%, sys=10.09%, ctx=425, majf=0, minf=1 00:31:55.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:55.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.442 issued rwts: total=5624,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.442 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:31:55.442 job1: (groupid=0, jobs=1): err= 0: pid=373280: Mon Oct 7 09:52:44 2024 00:31:55.442 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:31:55.442 slat (usec): min=2, max=10299, avg=140.53, stdev=798.71 00:31:55.442 clat (usec): min=7127, max=46532, avg=18136.56, stdev=7003.86 00:31:55.442 lat (usec): min=7131, max=46536, avg=18277.09, stdev=7068.56 00:31:55.442 clat percentiles (usec): 00:31:55.442 | 1.00th=[ 7373], 5.00th=[10421], 10.00th=[11469], 20.00th=[12649], 00:31:55.442 | 30.00th=[13435], 40.00th=[13960], 50.00th=[15139], 60.00th=[17171], 00:31:55.442 | 70.00th=[22676], 80.00th=[24511], 90.00th=[28967], 95.00th=[30016], 00:31:55.442 | 99.00th=[37487], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:31:55.442 | 99.99th=[46400] 00:31:55.442 write: IOPS=3141, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1008msec); 0 zone resets 00:31:55.442 slat (usec): min=3, max=26655, avg=173.30, stdev=923.59 00:31:55.442 clat (usec): min=3634, max=91026, avg=22346.07, stdev=15800.23 00:31:55.442 lat (usec): min=4310, max=91040, avg=22519.37, stdev=15897.37 00:31:55.442 clat percentiles (usec): 00:31:55.442 | 1.00th=[ 8094], 5.00th=[ 9110], 10.00th=[11076], 20.00th=[11863], 00:31:55.442 | 30.00th=[12518], 40.00th=[14091], 50.00th=[14484], 60.00th=[17695], 00:31:55.442 | 70.00th=[25297], 80.00th=[30016], 90.00th=[46400], 95.00th=[56886], 00:31:55.442 | 99.00th=[83362], 99.50th=[83362], 99.90th=[90702], 99.95th=[90702], 00:31:55.442 | 99.99th=[90702] 00:31:55.442 bw ( KiB/s): min=12288, max=12288, per=18.08%, avg=12288.00, stdev= 0.00, samples=2 00:31:55.442 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:31:55.442 lat (msec) : 4=0.02%, 10=5.15%, 20=58.71%, 50=32.46%, 100=3.67% 00:31:55.442 cpu : usr=3.18%, sys=4.47%, ctx=384, majf=0, minf=1 00:31:55.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:55.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.442 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.442 issued rwts: total=3072,3167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:55.442 job2: (groupid=0, jobs=1): err= 0: pid=373281: Mon Oct 7 09:52:44 2024 00:31:55.442 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:31:55.442 slat (usec): min=2, max=14462, avg=105.95, stdev=620.73 00:31:55.442 clat (usec): min=9877, max=38095, avg=15284.76, stdev=3970.15 00:31:55.443 lat (usec): min=9884, max=38119, avg=15390.71, stdev=4000.14 00:31:55.443 clat percentiles (usec): 00:31:55.443 | 1.00th=[10945], 5.00th=[11863], 10.00th=[11994], 20.00th=[12780], 00:31:55.443 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14353], 60.00th=[15008], 00:31:55.443 | 70.00th=[15664], 80.00th=[16057], 90.00th=[17957], 95.00th=[24773], 00:31:55.443 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33424], 99.95th=[33424], 00:31:55.443 | 99.99th=[38011] 00:31:55.443 write: IOPS=4488, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1006msec); 0 zone resets 00:31:55.443 slat (usec): min=3, max=13119, avg=114.31, stdev=699.80 00:31:55.443 clat (usec): min=1282, max=28202, avg=14388.75, stdev=2420.40 00:31:55.443 lat (usec): min=4564, max=28227, avg=14503.05, stdev=2493.87 00:31:55.443 clat percentiles (usec): 00:31:55.443 | 1.00th=[ 6980], 5.00th=[11469], 10.00th=[12387], 20.00th=[12911], 00:31:55.443 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14222], 60.00th=[14615], 00:31:55.443 | 70.00th=[15401], 80.00th=[15664], 90.00th=[17171], 95.00th=[18744], 00:31:55.443 | 99.00th=[20841], 99.50th=[21890], 99.90th=[26608], 99.95th=[27395], 00:31:55.443 | 99.99th=[28181] 00:31:55.443 bw ( KiB/s): min=16432, max=18664, per=25.82%, avg=17548.00, stdev=1578.26, samples=2 00:31:55.443 iops : min= 4108, max= 4666, avg=4387.00, stdev=394.57, samples=2 00:31:55.443 lat (msec) : 2=0.01%, 10=1.57%, 20=92.96%, 50=5.46% 00:31:55.443 cpu : usr=3.88%, sys=6.97%, ctx=405, majf=0, minf=1 
00:31:55.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:55.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.443 issued rwts: total=4096,4515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:55.443 job3: (groupid=0, jobs=1): err= 0: pid=373282: Mon Oct 7 09:52:44 2024 00:31:55.443 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:31:55.443 slat (usec): min=2, max=9783, avg=130.12, stdev=745.31 00:31:55.443 clat (usec): min=9411, max=40388, avg=16318.83, stdev=3951.41 00:31:55.443 lat (usec): min=9428, max=40394, avg=16448.95, stdev=4022.02 00:31:55.443 clat percentiles (usec): 00:31:55.443 | 1.00th=[10028], 5.00th=[11469], 10.00th=[12518], 20.00th=[13435], 00:31:55.443 | 30.00th=[13960], 40.00th=[14615], 50.00th=[15533], 60.00th=[16319], 00:31:55.443 | 70.00th=[17433], 80.00th=[19006], 90.00th=[21103], 95.00th=[23725], 00:31:55.443 | 99.00th=[28181], 99.50th=[32900], 99.90th=[40633], 99.95th=[40633], 00:31:55.443 | 99.99th=[40633] 00:31:55.443 write: IOPS=3798, BW=14.8MiB/s (15.6MB/s)(14.9MiB/1003msec); 0 zone resets 00:31:55.443 slat (usec): min=3, max=11799, avg=133.55, stdev=763.55 00:31:55.443 clat (usec): min=439, max=63501, avg=18023.65, stdev=8671.63 00:31:55.443 lat (usec): min=4599, max=63524, avg=18157.20, stdev=8740.32 00:31:55.443 clat percentiles (usec): 00:31:55.443 | 1.00th=[ 4948], 5.00th=[ 7635], 10.00th=[11076], 20.00th=[13173], 00:31:55.443 | 30.00th=[13829], 40.00th=[14484], 50.00th=[15008], 60.00th=[17171], 00:31:55.443 | 70.00th=[19530], 80.00th=[24249], 90.00th=[25297], 95.00th=[29492], 00:31:55.443 | 99.00th=[56361], 99.50th=[60556], 99.90th=[63701], 99.95th=[63701], 00:31:55.443 | 99.99th=[63701] 00:31:55.443 bw ( KiB/s): min=13072, max=16384, per=21.67%, avg=14728.00, stdev=2341.94, samples=2 00:31:55.443 iops 
: min= 3268, max= 4096, avg=3682.00, stdev=585.48, samples=2 00:31:55.443 lat (usec) : 500=0.01% 00:31:55.443 lat (msec) : 10=4.80%, 20=72.63%, 50=21.49%, 100=1.07% 00:31:55.443 cpu : usr=3.49%, sys=5.19%, ctx=355, majf=0, minf=1 00:31:55.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:55.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.443 issued rwts: total=3584,3810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:55.443 00:31:55.443 Run status group 0 (all jobs): 00:31:55.443 READ: bw=63.5MiB/s (66.5MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1002-1008msec 00:31:55.443 WRITE: bw=66.4MiB/s (69.6MB/s), 12.3MiB/s-22.0MiB/s (12.9MB/s-23.0MB/s), io=66.9MiB (70.1MB), run=1002-1008msec 00:31:55.443 00:31:55.443 Disk stats (read/write): 00:31:55.443 nvme0n1: ios=4660/4999, merge=0/0, ticks=51473/51976, in_queue=103449, util=95.99% 00:31:55.443 nvme0n2: ios=2610/2647, merge=0/0, ticks=20026/26628, in_queue=46654, util=89.53% 00:31:55.443 nvme0n3: ios=3630/3862, merge=0/0, ticks=21102/21976, in_queue=43078, util=97.71% 00:31:55.443 nvme0n4: ios=3030/3072, merge=0/0, ticks=25971/30833, in_queue=56804, util=98.63% 00:31:55.443 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:55.709 [global] 00:31:55.709 thread=1 00:31:55.709 invalidate=1 00:31:55.709 rw=randwrite 00:31:55.709 time_based=1 00:31:55.709 runtime=1 00:31:55.709 ioengine=libaio 00:31:55.709 direct=1 00:31:55.709 bs=4096 00:31:55.709 iodepth=128 00:31:55.709 norandommap=0 00:31:55.709 numjobs=1 00:31:55.709 00:31:55.709 verify_dump=1 00:31:55.709 verify_backlog=512 00:31:55.709 verify_state_save=0 
00:31:55.709 do_verify=1 00:31:55.709 verify=crc32c-intel 00:31:55.709 [job0] 00:31:55.709 filename=/dev/nvme0n1 00:31:55.709 [job1] 00:31:55.709 filename=/dev/nvme0n2 00:31:55.709 [job2] 00:31:55.709 filename=/dev/nvme0n3 00:31:55.709 [job3] 00:31:55.709 filename=/dev/nvme0n4 00:31:55.709 Could not set queue depth (nvme0n1) 00:31:55.709 Could not set queue depth (nvme0n2) 00:31:55.709 Could not set queue depth (nvme0n3) 00:31:55.709 Could not set queue depth (nvme0n4) 00:31:55.709 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:55.709 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:55.709 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:55.709 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:55.709 fio-3.35 00:31:55.709 Starting 4 threads 00:31:57.086 00:31:57.086 job0: (groupid=0, jobs=1): err= 0: pid=373499: Mon Oct 7 09:52:45 2024 00:31:57.086 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1006msec) 00:31:57.086 slat (usec): min=2, max=13059, avg=130.38, stdev=727.69 00:31:57.086 clat (usec): min=3223, max=34755, avg=16771.46, stdev=5035.63 00:31:57.086 lat (usec): min=5730, max=34767, avg=16901.85, stdev=5080.46 00:31:57.086 clat percentiles (usec): 00:31:57.086 | 1.00th=[ 8029], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[12387], 00:31:57.086 | 30.00th=[13435], 40.00th=[14353], 50.00th=[15795], 60.00th=[17695], 00:31:57.086 | 70.00th=[19792], 80.00th=[21627], 90.00th=[22938], 95.00th=[25035], 00:31:57.086 | 99.00th=[29230], 99.50th=[30802], 99.90th=[33162], 99.95th=[33162], 00:31:57.086 | 99.99th=[34866] 00:31:57.086 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:31:57.086 slat (usec): min=3, max=20169, avg=140.69, stdev=844.56 00:31:57.086 clat (usec): min=2407, 
max=58131, avg=18975.92, stdev=9083.55 00:31:57.086 lat (usec): min=2424, max=58161, avg=19116.61, stdev=9151.30 00:31:57.086 clat percentiles (usec): 00:31:57.086 | 1.00th=[ 5997], 5.00th=[ 9110], 10.00th=[10945], 20.00th=[11994], 00:31:57.086 | 30.00th=[12387], 40.00th=[15008], 50.00th=[16188], 60.00th=[19792], 00:31:57.086 | 70.00th=[22152], 80.00th=[24249], 90.00th=[31065], 95.00th=[34866], 00:31:57.086 | 99.00th=[55837], 99.50th=[57410], 99.90th=[57934], 99.95th=[57934], 00:31:57.086 | 99.99th=[57934] 00:31:57.086 bw ( KiB/s): min=12288, max=16384, per=21.28%, avg=14336.00, stdev=2896.31, samples=2 00:31:57.086 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:31:57.086 lat (msec) : 4=0.34%, 10=7.25%, 20=58.22%, 50=33.54%, 100=0.65% 00:31:57.086 cpu : usr=3.68%, sys=4.78%, ctx=390, majf=0, minf=1 00:31:57.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:57.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:57.086 issued rwts: total=3535,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:57.086 job1: (groupid=0, jobs=1): err= 0: pid=373501: Mon Oct 7 09:52:45 2024 00:31:57.086 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:31:57.086 slat (usec): min=2, max=3514, avg=90.41, stdev=389.84 00:31:57.086 clat (usec): min=8203, max=15404, avg=11786.72, stdev=1043.70 00:31:57.086 lat (usec): min=8306, max=15417, avg=11877.13, stdev=1013.01 00:31:57.086 clat percentiles (usec): 00:31:57.086 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10421], 20.00th=[10945], 00:31:57.086 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:31:57.086 | 70.00th=[12387], 80.00th=[12518], 90.00th=[13042], 95.00th=[13566], 00:31:57.086 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15139], 99.95th=[15401], 00:31:57.086 | 
99.99th=[15401] 00:31:57.086 write: IOPS=5353, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1001msec); 0 zone resets 00:31:57.086 slat (usec): min=2, max=15158, avg=92.43, stdev=451.98 00:31:57.086 clat (usec): min=345, max=33726, avg=12239.05, stdev=3096.01 00:31:57.086 lat (usec): min=2958, max=33736, avg=12331.48, stdev=3102.66 00:31:57.086 clat percentiles (usec): 00:31:57.086 | 1.00th=[ 6587], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10945], 00:31:57.086 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:31:57.086 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13435], 95.00th=[14615], 00:31:57.086 | 99.00th=[29754], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:31:57.086 | 99.99th=[33817] 00:31:57.086 bw ( KiB/s): min=20480, max=20480, per=30.40%, avg=20480.00, stdev= 0.00, samples=1 00:31:57.086 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:31:57.086 lat (usec) : 500=0.01% 00:31:57.086 lat (msec) : 4=0.36%, 10=6.15%, 20=91.90%, 50=1.58% 00:31:57.086 cpu : usr=4.10%, sys=8.30%, ctx=734, majf=0, minf=1 00:31:57.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:57.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:57.086 issued rwts: total=5120,5359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:57.086 job2: (groupid=0, jobs=1): err= 0: pid=373507: Mon Oct 7 09:52:45 2024 00:31:57.086 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:31:57.086 slat (usec): min=2, max=17714, avg=144.17, stdev=784.46 00:31:57.086 clat (usec): min=10976, max=66902, avg=20027.15, stdev=7862.55 00:31:57.086 lat (usec): min=10986, max=66912, avg=20171.33, stdev=7909.98 00:31:57.086 clat percentiles (usec): 00:31:57.086 | 1.00th=[11731], 5.00th=[12387], 10.00th=[13042], 20.00th=[13960], 00:31:57.086 | 30.00th=[14353], 
40.00th=[15795], 50.00th=[17433], 60.00th=[20841], 00:31:57.086 | 70.00th=[22676], 80.00th=[24773], 90.00th=[29492], 95.00th=[34341], 00:31:57.086 | 99.00th=[51643], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:31:57.086 | 99.99th=[66847] 00:31:57.086 write: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1006msec); 0 zone resets 00:31:57.086 slat (usec): min=3, max=13025, avg=153.21, stdev=770.73 00:31:57.086 clat (usec): min=3945, max=43269, avg=19351.88, stdev=8384.25 00:31:57.086 lat (usec): min=7646, max=43308, avg=19505.10, stdev=8449.78 00:31:57.086 clat percentiles (usec): 00:31:57.086 | 1.00th=[ 9765], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:31:57.086 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14484], 60.00th=[15401], 00:31:57.086 | 70.00th=[22152], 80.00th=[26346], 90.00th=[33817], 95.00th=[38011], 00:31:57.086 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:31:57.086 | 99.99th=[43254] 00:31:57.086 bw ( KiB/s): min= 9752, max=16384, per=19.40%, avg=13068.00, stdev=4689.53, samples=2 00:31:57.086 iops : min= 2438, max= 4096, avg=3267.00, stdev=1172.38, samples=2 00:31:57.086 lat (msec) : 4=0.02%, 10=0.65%, 20=59.93%, 50=38.93%, 100=0.48% 00:31:57.086 cpu : usr=4.08%, sys=8.66%, ctx=346, majf=0, minf=2 00:31:57.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:57.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:57.086 issued rwts: total=3072,3394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:57.086 job3: (groupid=0, jobs=1): err= 0: pid=373508: Mon Oct 7 09:52:45 2024 00:31:57.086 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:31:57.087 slat (usec): min=2, max=16741, avg=106.79, stdev=735.25 00:31:57.087 clat (usec): min=1217, max=49818, avg=13851.70, stdev=5064.87 00:31:57.087 lat 
(usec): min=1222, max=49825, avg=13958.48, stdev=5113.63 00:31:57.087 clat percentiles (usec): 00:31:57.087 | 1.00th=[ 2212], 5.00th=[ 7898], 10.00th=[ 9896], 20.00th=[11207], 00:31:57.087 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13566], 60.00th=[14091], 00:31:57.087 | 70.00th=[14877], 80.00th=[16057], 90.00th=[17433], 95.00th=[20055], 00:31:57.087 | 99.00th=[36963], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:31:57.087 | 99.99th=[50070] 00:31:57.087 write: IOPS=4602, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1008msec); 0 zone resets 00:31:57.087 slat (usec): min=3, max=10856, avg=94.00, stdev=590.17 00:31:57.087 clat (usec): min=1113, max=49799, avg=13652.75, stdev=4659.59 00:31:57.087 lat (usec): min=1131, max=49806, avg=13746.75, stdev=4693.95 00:31:57.087 clat percentiles (usec): 00:31:57.087 | 1.00th=[ 4817], 5.00th=[ 7111], 10.00th=[ 7832], 20.00th=[10814], 00:31:57.087 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12911], 60.00th=[13566], 00:31:57.087 | 70.00th=[14877], 80.00th=[16188], 90.00th=[19530], 95.00th=[22414], 00:31:57.087 | 99.00th=[29754], 99.50th=[34341], 99.90th=[39584], 99.95th=[39584], 00:31:57.087 | 99.99th=[49546] 00:31:57.087 bw ( KiB/s): min=16384, max=20480, per=27.36%, avg=18432.00, stdev=2896.31, samples=2 00:31:57.087 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:31:57.087 lat (msec) : 2=0.39%, 4=0.83%, 10=11.80%, 20=79.71%, 50=7.27% 00:31:57.087 cpu : usr=5.36%, sys=9.04%, ctx=434, majf=0, minf=2 00:31:57.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:57.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:57.087 issued rwts: total=4608,4639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:57.087 00:31:57.087 Run status group 0 (all jobs): 00:31:57.087 READ: bw=63.3MiB/s (66.4MB/s), 
11.9MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=63.8MiB (66.9MB), run=1001-1008msec 00:31:57.087 WRITE: bw=65.8MiB/s (69.0MB/s), 13.2MiB/s-20.9MiB/s (13.8MB/s-21.9MB/s), io=66.3MiB (69.5MB), run=1001-1008msec 00:31:57.087 00:31:57.087 Disk stats (read/write): 00:31:57.087 nvme0n1: ios=3057/3072, merge=0/0, ticks=28236/34468, in_queue=62704, util=97.70% 00:31:57.087 nvme0n2: ios=4225/4608, merge=0/0, ticks=12237/14148, in_queue=26385, util=94.92% 00:31:57.087 nvme0n3: ios=2691/3072, merge=0/0, ticks=18562/19717, in_queue=38279, util=90.08% 00:31:57.087 nvme0n4: ios=3617/4013, merge=0/0, ticks=37743/39782, in_queue=77525, util=90.20% 00:31:57.087 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:57.087 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=373640 00:31:57.087 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:57.087 09:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:57.087 [global] 00:31:57.087 thread=1 00:31:57.087 invalidate=1 00:31:57.087 rw=read 00:31:57.087 time_based=1 00:31:57.087 runtime=10 00:31:57.087 ioengine=libaio 00:31:57.087 direct=1 00:31:57.087 bs=4096 00:31:57.087 iodepth=1 00:31:57.087 norandommap=1 00:31:57.087 numjobs=1 00:31:57.087 00:31:57.087 [job0] 00:31:57.087 filename=/dev/nvme0n1 00:31:57.087 [job1] 00:31:57.087 filename=/dev/nvme0n2 00:31:57.087 [job2] 00:31:57.087 filename=/dev/nvme0n3 00:31:57.087 [job3] 00:31:57.087 filename=/dev/nvme0n4 00:31:57.087 Could not set queue depth (nvme0n1) 00:31:57.087 Could not set queue depth (nvme0n2) 00:31:57.087 Could not set queue depth (nvme0n3) 00:31:57.087 Could not set queue depth (nvme0n4) 00:31:57.345 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:31:57.345 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:57.345 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:57.345 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:57.345 fio-3.35 00:31:57.345 Starting 4 threads 00:32:00.627 09:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:00.627 09:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:00.627 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2641920, buflen=4096 00:32:00.627 fio: pid=373731, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:00.627 09:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:00.627 09:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:00.627 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=315392, buflen=4096 00:32:00.627 fio: pid=373730, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:00.885 09:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:00.885 09:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:00.885 fio: 
io_u error on file /dev/nvme0n1: Operation not supported: read offset=2633728, buflen=4096 00:32:00.885 fio: pid=373728, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:01.143 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=27406336, buflen=4096 00:32:01.143 fio: pid=373729, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:01.143 09:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:01.143 09:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:01.143 00:32:01.143 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=373728: Mon Oct 7 09:52:50 2024 00:32:01.143 read: IOPS=184, BW=735KiB/s (753kB/s)(2572KiB/3497msec) 00:32:01.143 slat (usec): min=7, max=6888, avg=28.05, stdev=270.81 00:32:01.143 clat (usec): min=210, max=41292, avg=5388.83, stdev=13508.51 00:32:01.143 lat (usec): min=220, max=47952, avg=5416.90, stdev=13539.03 00:32:01.143 clat percentiles (usec): 00:32:01.143 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 253], 00:32:01.143 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 265], 00:32:01.143 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[41157], 95.00th=[41157], 00:32:01.143 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:01.143 | 99.99th=[41157] 00:32:01.143 bw ( KiB/s): min= 96, max= 2904, per=9.82%, avg=840.00, stdev=1125.88, samples=6 00:32:01.143 iops : min= 24, max= 726, avg=210.00, stdev=281.47, samples=6 00:32:01.143 lat (usec) : 250=10.71%, 500=76.24%, 750=0.31% 00:32:01.143 lat (msec) : 50=12.58% 00:32:01.143 cpu : usr=0.09%, sys=0.60%, ctx=647, majf=0, minf=2 00:32:01.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.143 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.143 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:01.143 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=373729: Mon Oct 7 09:52:50 2024 00:32:01.143 read: IOPS=1775, BW=7101KiB/s (7272kB/s)(26.1MiB/3769msec) 00:32:01.143 slat (usec): min=4, max=15617, avg=16.75, stdev=306.71 00:32:01.143 clat (usec): min=178, max=41045, avg=540.74, stdev=3452.16 00:32:01.143 lat (usec): min=184, max=41060, avg=557.49, stdev=3466.40 00:32:01.143 clat percentiles (usec): 00:32:01.143 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:32:01.143 | 30.00th=[ 219], 40.00th=[ 231], 50.00th=[ 245], 60.00th=[ 251], 00:32:01.143 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:32:01.143 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:01.143 | 99.99th=[41157] 00:32:01.143 bw ( KiB/s): min= 96, max=15512, per=75.80%, avg=6481.14, stdev=5990.01, samples=7 00:32:01.143 iops : min= 24, max= 3878, avg=1620.29, stdev=1497.50, samples=7 00:32:01.143 lat (usec) : 250=59.16%, 500=39.54%, 750=0.52% 00:32:01.143 lat (msec) : 10=0.01%, 20=0.01%, 50=0.73% 00:32:01.143 cpu : usr=0.74%, sys=2.36%, ctx=6699, majf=0, minf=1 00:32:01.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.143 issued rwts: total=6692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:01.143 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, 
func=io_u error, error=Operation not supported): pid=373730: Mon Oct 7 09:52:50 2024 00:32:01.143 read: IOPS=24, BW=96.0KiB/s (98.3kB/s)(308KiB/3208msec) 00:32:01.143 slat (nsec): min=9704, max=48108, avg=22680.68, stdev=10460.75 00:32:01.143 clat (usec): min=317, max=42096, avg=41334.13, stdev=4746.17 00:32:01.143 lat (usec): min=340, max=42115, avg=41356.98, stdev=4745.93 00:32:01.143 clat percentiles (usec): 00:32:01.143 | 1.00th=[ 318], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:32:01.143 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:01.143 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:01.143 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:01.143 | 99.99th=[42206] 00:32:01.143 bw ( KiB/s): min= 96, max= 96, per=1.12%, avg=96.00, stdev= 0.00, samples=6 00:32:01.143 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:32:01.143 lat (usec) : 500=1.28% 00:32:01.143 lat (msec) : 50=97.44% 00:32:01.143 cpu : usr=0.00%, sys=0.12%, ctx=81, majf=0, minf=1 00:32:01.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.143 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.143 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:01.143 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=373731: Mon Oct 7 09:52:50 2024 00:32:01.143 read: IOPS=219, BW=878KiB/s (900kB/s)(2580KiB/2937msec) 00:32:01.143 slat (nsec): min=4890, max=41012, avg=8922.15, stdev=6560.32 00:32:01.143 clat (usec): min=213, max=42979, avg=4498.96, stdev=12436.19 00:32:01.143 lat (usec): min=218, max=42995, avg=4507.87, stdev=12441.07 00:32:01.143 clat percentiles (usec): 00:32:01.143 | 1.00th=[ 223], 5.00th=[ 233], 
10.00th=[ 243], 20.00th=[ 253], 00:32:01.143 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 269], 60.00th=[ 277], 00:32:01.143 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[41157], 95.00th=[41157], 00:32:01.144 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:32:01.144 | 99.99th=[42730] 00:32:01.144 bw ( KiB/s): min= 96, max= 2216, per=6.09%, avg=521.60, stdev=947.20, samples=5 00:32:01.144 iops : min= 24, max= 554, avg=130.40, stdev=236.80, samples=5 00:32:01.144 lat (usec) : 250=16.87%, 500=72.60% 00:32:01.144 lat (msec) : 50=10.37% 00:32:01.144 cpu : usr=0.20%, sys=0.17%, ctx=648, majf=0, minf=2 00:32:01.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.144 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.144 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:01.144 00:32:01.144 Run status group 0 (all jobs): 00:32:01.144 READ: bw=8550KiB/s (8755kB/s), 96.0KiB/s-7101KiB/s (98.3kB/s-7272kB/s), io=31.5MiB (33.0MB), run=2937-3769msec 00:32:01.144 00:32:01.144 Disk stats (read/write): 00:32:01.144 nvme0n1: ios=683/0, merge=0/0, ticks=4459/0, in_queue=4459, util=99.69% 00:32:01.144 nvme0n2: ios=6055/0, merge=0/0, ticks=3435/0, in_queue=3435, util=95.10% 00:32:01.144 nvme0n3: ios=129/0, merge=0/0, ticks=3337/0, in_queue=3337, util=99.84% 00:32:01.144 nvme0n4: ios=690/0, merge=0/0, ticks=3256/0, in_queue=3256, util=99.90% 00:32:01.402 09:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:01.402 09:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:01.660 09:52:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:01.660 09:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:02.227 09:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:02.227 09:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:02.227 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:02.227 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:02.485 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:02.485 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 373640 00:32:02.485 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:02.485 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:02.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:32:02.743 09:52:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:02.743 nvmf hotplug test: fio failed as expected 00:32:02.743 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:03.001 rmmod nvme_tcp 00:32:03.001 rmmod nvme_fabrics 00:32:03.001 rmmod nvme_keyring 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 371697 ']' 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 371697 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 371697 ']' 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 371697 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 371697 
00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 371697' 00:32:03.001 killing process with pid 371697 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 371697 00:32:03.001 09:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 371697 00:32:03.568 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:03.568 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:03.568 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:03.568 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:03.569 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:32:03.569 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:03.569 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:32:03.569 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.569 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.569 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.569 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.569 09:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.478 00:32:05.478 real 0m24.002s 00:32:05.478 user 1m8.670s 00:32:05.478 sys 0m9.774s 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:05.478 ************************************ 00:32:05.478 END TEST nvmf_fio_target 00:32:05.478 ************************************ 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.478 ************************************ 00:32:05.478 START TEST nvmf_bdevio 00:32:05.478 ************************************ 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:05.478 * Looking for test storage... 
00:32:05.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:32:05.478 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.737 09:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- 
# return 0 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.737 --rc genhtml_branch_coverage=1 00:32:05.737 --rc genhtml_function_coverage=1 00:32:05.737 --rc genhtml_legend=1 00:32:05.737 --rc geninfo_all_blocks=1 00:32:05.737 --rc geninfo_unexecuted_blocks=1 00:32:05.737 00:32:05.737 ' 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.737 --rc genhtml_branch_coverage=1 00:32:05.737 --rc genhtml_function_coverage=1 00:32:05.737 --rc genhtml_legend=1 00:32:05.737 --rc geninfo_all_blocks=1 00:32:05.737 --rc geninfo_unexecuted_blocks=1 00:32:05.737 00:32:05.737 ' 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.737 --rc genhtml_branch_coverage=1 00:32:05.737 --rc genhtml_function_coverage=1 00:32:05.737 --rc genhtml_legend=1 00:32:05.737 --rc geninfo_all_blocks=1 00:32:05.737 --rc geninfo_unexecuted_blocks=1 00:32:05.737 00:32:05.737 ' 00:32:05.737 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:05.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.737 --rc genhtml_branch_coverage=1 00:32:05.737 --rc genhtml_function_coverage=1 00:32:05.737 --rc genhtml_legend=1 00:32:05.737 --rc geninfo_all_blocks=1 00:32:05.737 --rc geninfo_unexecuted_blocks=1 00:32:05.737 00:32:05.737 ' 00:32:05.738 09:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.738 09:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.738 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.645 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.645 09:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:32:07.646 Found 0000:09:00.0 (0x8086 - 0x1592) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:32:07.646 Found 0000:09:00.1 (0x8086 - 0x1592) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:32:07.646 09:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:07.646 Found net devices under 0000:09:00.0: cvl_0_0 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:07.646 Found net devices under 0000:09:00.1: cvl_0_1 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.646 09:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:32:07.646 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:32:07.906 00:32:07.906 --- 10.0.0.2 ping statistics --- 00:32:07.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.906 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
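The namespace plumbing traced above (from `nvmf/common.sh`'s `nvmf_tcp_init`) can be summarized as a standalone script. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, the 10.0.0.0/24 addresses, and port 4420 are all taken from the log; everything else is a hedged reconstruction, and it requires root plus the physical E810 ports this rig has:

```shell
#!/usr/bin/env bash
# Sketch of the TCP test topology built above: the target-side port is
# moved into its own network namespace so target and initiator traffic
# cross a real link instead of loopback. Names/addresses from the log.
set -euo pipefail

TARGET_IF=cvl_0_0        # target-side port (moved into the namespace)
INITIATOR_IF=cvl_0_1     # initiator-side port (stays in the root ns)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port; the log's ipts() wrapper additionally tags the
# rule with an SPDK_NVMF comment so teardown can locate and remove it.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, mirroring the log's two pings across the namespace boundary.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

This cannot run without the test hardware, so it is illustration only; the real cleanup path (`remove_spdk_ns`, seen earlier in the trace) deletes the namespace, which returns the interface to the root namespace automatically.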
00:32:07.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:32:07.906 00:32:07.906 --- 10.0.0.1 ping statistics --- 00:32:07.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.906 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@507 -- # nvmfpid=376336 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 376336 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 376336 ']' 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.906 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.906 [2024-10-07 09:52:56.756978] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:07.906 [2024-10-07 09:52:56.758011] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:32:07.906 [2024-10-07 09:52:56.758062] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.906 [2024-10-07 09:52:56.818673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:08.166 [2024-10-07 09:52:56.927957] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.166 [2024-10-07 09:52:56.928009] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.166 [2024-10-07 09:52:56.928038] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.166 [2024-10-07 09:52:56.928049] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.166 [2024-10-07 09:52:56.928059] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.166 [2024-10-07 09:52:56.930689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:32:08.166 [2024-10-07 09:52:56.930760] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:32:08.166 [2024-10-07 09:52:56.930813] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:32:08.166 [2024-10-07 09:52:56.930817] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:08.166 [2024-10-07 09:52:57.028286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:08.166 [2024-10-07 09:52:57.028501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:08.166 [2024-10-07 09:52:57.028823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:08.166 [2024-10-07 09:52:57.029353] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:08.166 [2024-10-07 09:52:57.029580] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:08.166 [2024-10-07 09:52:57.079510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:08.166 Malloc0 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:08.166 [2024-10-07 09:52:57.135661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
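The target bring-up driven by `target/bdevio.sh` in the trace above corresponds roughly to the following sequence. The `nvmf_tgt` flags, subsystem NQN, serial, malloc geometry (64 MiB of 512 B blocks, per `MALLOC_BDEV_SIZE`/`MALLOC_BLOCK_SIZE`), and listen address are all from the log; the `scripts/rpc.py` location is an assumption based on a standard SPDK checkout, since the log issues these calls through the `rpc_cmd` wrapper instead:

```shell
# Sketch of the bdevio target setup, assuming scripts/rpc.py from an
# SPDK source tree. Requires root and a built SPDK; illustration only.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Launch the target inside the namespace, in interrupt mode.
# -m 0x78 pins reactors to cores 3-6, matching the reactor_run notices.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

After the listener notice, the bdevio binary attaches as an initiator from the root namespace using the generated JSON (`bdev_nvme_attach_controller` against 10.0.0.2:4420), which is what the `gen_nvmf_target_json` output below shows.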
00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:08.166 { 00:32:08.166 "params": { 00:32:08.166 "name": "Nvme$subsystem", 00:32:08.166 "trtype": "$TEST_TRANSPORT", 00:32:08.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:08.166 "adrfam": "ipv4", 00:32:08.166 "trsvcid": "$NVMF_PORT", 00:32:08.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.166 "hdgst": ${hdgst:-false}, 00:32:08.166 "ddgst": ${ddgst:-false} 00:32:08.166 }, 00:32:08.166 "method": "bdev_nvme_attach_controller" 00:32:08.166 } 00:32:08.166 EOF 00:32:08.166 )") 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:32:08.166 09:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:08.166 "params": { 00:32:08.166 "name": "Nvme1", 00:32:08.166 "trtype": "tcp", 00:32:08.166 "traddr": "10.0.0.2", 00:32:08.166 "adrfam": "ipv4", 00:32:08.166 "trsvcid": "4420", 00:32:08.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:08.166 "hdgst": false, 00:32:08.166 "ddgst": false 00:32:08.166 }, 00:32:08.166 "method": "bdev_nvme_attach_controller" 00:32:08.166 }' 00:32:08.426 [2024-10-07 09:52:57.182287] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:32:08.426 [2024-10-07 09:52:57.182359] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376366 ] 00:32:08.426 [2024-10-07 09:52:57.240495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:08.426 [2024-10-07 09:52:57.357525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.426 [2024-10-07 09:52:57.357578] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:08.426 [2024-10-07 09:52:57.357582] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.684 I/O targets: 00:32:08.684 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:08.684 00:32:08.684 00:32:08.684 CUnit - A unit testing framework for C - Version 2.1-3 00:32:08.684 http://cunit.sourceforge.net/ 00:32:08.684 00:32:08.684 00:32:08.684 Suite: bdevio tests on: Nvme1n1 00:32:08.684 Test: blockdev write read block ...passed 00:32:08.684 Test: blockdev write zeroes read block ...passed 00:32:08.684 Test: blockdev write zeroes read no split ...passed 00:32:08.684 Test: blockdev 
write zeroes read split ...passed 00:32:08.684 Test: blockdev write zeroes read split partial ...passed 00:32:08.684 Test: blockdev reset ...[2024-10-07 09:52:57.644056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:08.684 [2024-10-07 09:52:57.644168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf082b0 (9): Bad file descriptor 00:32:08.942 [2024-10-07 09:52:57.737033] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:08.942 passed 00:32:08.942 Test: blockdev write read 8 blocks ...passed 00:32:08.942 Test: blockdev write read size > 128k ...passed 00:32:08.942 Test: blockdev write read invalid size ...passed 00:32:08.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:08.942 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:08.942 Test: blockdev write read max offset ...passed 00:32:08.942 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:08.942 Test: blockdev writev readv 8 blocks ...passed 00:32:08.942 Test: blockdev writev readv 30 x 1block ...passed 00:32:08.942 Test: blockdev writev readv block ...passed 00:32:08.942 Test: blockdev writev readv size > 128k ...passed 00:32:09.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:09.200 Test: blockdev comparev and writev ...[2024-10-07 09:52:57.952880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:09.200 [2024-10-07 09:52:57.952916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:57.952940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:09.200 [2024-10-07 09:52:57.952957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:57.953339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:09.200 [2024-10-07 09:52:57.953365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:57.953387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:09.200 [2024-10-07 09:52:57.953404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:57.953803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:09.200 [2024-10-07 09:52:57.953828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:57.953850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:09.200 [2024-10-07 09:52:57.953877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:57.954257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:09.200 [2024-10-07 09:52:57.954282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:57.954304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:32:09.200 [2024-10-07 09:52:57.954320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:09.200 passed 00:32:09.200 Test: blockdev nvme passthru rw ...passed 00:32:09.200 Test: blockdev nvme passthru vendor specific ...[2024-10-07 09:52:58.036931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:09.200 [2024-10-07 09:52:58.036959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:58.037120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:09.200 [2024-10-07 09:52:58.037144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:58.037310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:09.200 [2024-10-07 09:52:58.037334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:09.200 [2024-10-07 09:52:58.037494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:09.200 [2024-10-07 09:52:58.037517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:09.200 passed 00:32:09.200 Test: blockdev nvme admin passthru ...passed 00:32:09.200 Test: blockdev copy ...passed 00:32:09.200 00:32:09.200 Run Summary: Type Total Ran Passed Failed Inactive 00:32:09.200 suites 1 1 n/a 0 0 00:32:09.200 tests 23 23 23 0 0 00:32:09.200 asserts 152 152 152 0 n/a 00:32:09.200 00:32:09.200 Elapsed time = 1.120 seconds 00:32:09.459 09:52:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.459 rmmod nvme_tcp 00:32:09.459 rmmod nvme_fabrics 00:32:09.459 rmmod nvme_keyring 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # 
'[' -n 376336 ']' 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 376336 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 376336 ']' 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 376336 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 376336 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 376336' 00:32:09.459 killing process with pid 376336 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 376336 00:32:09.459 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 376336 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:09.718 
09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.718 09:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.257 09:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:12.257 00:32:12.257 real 0m6.382s 00:32:12.257 user 0m8.433s 00:32:12.257 sys 0m2.435s 00:32:12.257 09:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:12.257 09:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:12.257 ************************************ 00:32:12.257 END TEST nvmf_bdevio 00:32:12.257 ************************************ 00:32:12.257 09:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:12.258 00:32:12.258 real 3m55.760s 00:32:12.258 user 8m57.078s 00:32:12.258 sys 1m24.568s 00:32:12.258 09:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:12.258 09:53:00 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:12.258 ************************************ 00:32:12.258 END TEST nvmf_target_core_interrupt_mode 00:32:12.258 ************************************ 00:32:12.258 09:53:00 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:12.258 09:53:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:12.258 09:53:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:12.258 09:53:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.258 ************************************ 00:32:12.258 START TEST nvmf_interrupt 00:32:12.258 ************************************ 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:12.258 * Looking for test storage... 
00:32:12.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:12.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.258 --rc genhtml_branch_coverage=1 00:32:12.258 --rc genhtml_function_coverage=1 00:32:12.258 --rc genhtml_legend=1 00:32:12.258 --rc geninfo_all_blocks=1 00:32:12.258 --rc geninfo_unexecuted_blocks=1 00:32:12.258 00:32:12.258 ' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:12.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.258 --rc genhtml_branch_coverage=1 00:32:12.258 --rc 
genhtml_function_coverage=1 00:32:12.258 --rc genhtml_legend=1 00:32:12.258 --rc geninfo_all_blocks=1 00:32:12.258 --rc geninfo_unexecuted_blocks=1 00:32:12.258 00:32:12.258 ' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:12.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.258 --rc genhtml_branch_coverage=1 00:32:12.258 --rc genhtml_function_coverage=1 00:32:12.258 --rc genhtml_legend=1 00:32:12.258 --rc geninfo_all_blocks=1 00:32:12.258 --rc geninfo_unexecuted_blocks=1 00:32:12.258 00:32:12.258 ' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:12.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.258 --rc genhtml_branch_coverage=1 00:32:12.258 --rc genhtml_function_coverage=1 00:32:12.258 --rc genhtml_legend=1 00:32:12.258 --rc geninfo_all_blocks=1 00:32:12.258 --rc geninfo_unexecuted_blocks=1 00:32:12.258 00:32:12.258 ' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.258 
09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.258 
09:53:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:12.258 09:53:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:12.258 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:12.259 
09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:12.259 09:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.164 09:53:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:32:14.164 Found 0000:09:00.0 (0x8086 - 0x1592) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:32:14.164 Found 0000:09:00.1 (0x8086 - 0x1592) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:14.164 09:53:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:14.164 Found net devices under 0000:09:00.0: cvl_0_0 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:14.164 Found net devices under 0000:09:00.1: cvl_0_1 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:14.164 09:53:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:14.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:14.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms
00:32:14.164
00:32:14.164 --- 10.0.0.2 ping statistics ---
00:32:14.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:14.164 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:14.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:14.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms
00:32:14.164
00:32:14.164 --- 10.0.0.1 ping statistics ---
00:32:14.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:14.164 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:14.164 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=378359
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 378359
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 378359 ']'
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:14.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:14.165 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:14.165 [2024-10-07 09:53:03.108054] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:14.165 [2024-10-07 09:53:03.109103] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:32:14.165 [2024-10-07 09:53:03.109154] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:14.423 [2024-10-07 09:53:03.169748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:14.423 [2024-10-07 09:53:03.278749] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:14.423 [2024-10-07 09:53:03.278811] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:14.423 [2024-10-07 09:53:03.278839] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:14.423 [2024-10-07 09:53:03.278851] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:14.423 [2024-10-07 09:53:03.278861] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:14.423 [2024-10-07 09:53:03.282688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:32:14.423 [2024-10-07 09:53:03.282700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:32:14.423 [2024-10-07 09:53:03.362352] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:14.423 [2024-10-07 09:53:03.362392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:14.423 [2024-10-07 09:53:03.362645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:32:14.424 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:32:14.683 5000+0 records in
00:32:14.683 5000+0 records out
00:32:14.683 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0138592 s, 739 MB/s
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:14.683 AIO0
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:14.683 [2024-10-07 09:53:03.467384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:14.683 [2024-10-07 09:53:03.503630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 378359 0
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 378359 0 idle
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=378359
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 378359 -w 256
00:32:14.683 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 378359 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.30 reactor_0'
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 378359 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.30 reactor_0
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 378359 1
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 378359 1 idle
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=378359
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 378359 -w 256
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 378364 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1'
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 378364 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1
00:32:14.941 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=378514
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 378359 0
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 378359 0 busy
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=378359
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 378359 -w 256
00:32:14.942 09:53:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 378359 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:00.52 reactor_0'
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 378359 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:00.52 reactor_0
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 378359 1
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 378359 1 busy
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=378359
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:15.199 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:15.200 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:15.200 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 378359 -w 256
00:32:15.200 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:32:15.200 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 378364 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:00.27 reactor_1'
00:32:15.200 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 378364 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:00.27 reactor_1
00:32:15.200 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:15.200 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:15.456 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:32:15.456 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:32:15.456 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:32:15.456 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:32:15.456 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:32:15.456 09:53:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:15.456 09:53:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 378514
00:32:25.422 Initializing NVMe Controllers
00:32:25.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:25.422 Controller IO queue size 256, less than required.
00:32:25.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:25.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:25.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:25.422 Initialization complete. Launching workers.
00:32:25.422 ========================================================
00:32:25.422 Latency(us)
00:32:25.422 Device Information : IOPS MiB/s Average min max
00:32:25.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13539.30 52.89 18920.87 4233.56 22840.89
00:32:25.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13306.30 51.98 19253.01 4071.55 22268.39
00:32:25.422 ========================================================
00:32:25.422 Total : 26845.59 104.87 19085.50 4071.55 22840.89
00:32:25.422
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 378359 0
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 378359 0 idle
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=378359
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 378359 -w 256
00:32:25.422 09:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 378359 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:20.15 reactor_0'
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 378359 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:20.15 reactor_0
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 378359 1
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 378359 1 idle
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=378359
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 378359 -w 256
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:32:25.422 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 378364 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:09.88 reactor_1'
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 378364 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:09.88 reactor_1
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:25.423 09:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:32:25.682 09:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:32:25.682 09:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0
00:32:25.682 09:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:32:25.682 09:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:32:25.682 09:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 378359 0
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 378359 0 idle
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=378359
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 378359 -w 256
00:32:27.584 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 378359 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:20.24 reactor_0'
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 378359 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:20.24 reactor_0
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:27.843 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 378359 1
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 378359 1 idle
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=378359
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 378359 -w 256
00:32:27.844 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 378364 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:09.92 reactor_1'
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 378364 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:09.92 reactor_1
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:32:28.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:28.103 09:53:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:32:28.104 09:53:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:28.104 09:53:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:28.104 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 378359 ']'
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 378359
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 378359 ']'
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 378359
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 378359
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 378359'
killing process with pid 378359
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 378359
00:32:28.104 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 378359
00:32:28.363 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:32:28.363 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:32:28.363 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:32:28.363 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:32:28.363 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save
00:32:28.363 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:32:28.363 09:53:17 nvmf_tcp.nvmf_interrupt --
nvmf/common.sh@789 -- # iptables-restore
00:32:28.623 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:28.623 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:28.623 09:53:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:28.623 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:28.623 09:53:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:30.528 09:53:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:30.528
00:32:30.528 real 0m18.583s
00:32:30.528 user 0m37.150s
00:32:30.528 sys 0m6.413s
00:32:30.528 09:53:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:30.528 09:53:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:30.528 ************************************
00:32:30.528 END TEST nvmf_interrupt
00:32:30.528 ************************************
00:32:30.528
00:32:30.528 real 24m55.289s
00:32:30.528 user 58m15.604s
00:32:30.528 sys 6m34.748s
00:32:30.528 09:53:19 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:30.528 09:53:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:30.528 ************************************
00:32:30.528 END TEST nvmf_tcp
00:32:30.528 ************************************
00:32:30.528 09:53:19 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]]
00:32:30.528 09:53:19 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:32:30.528 09:53:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:30.528 09:53:19 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:30.528 09:53:19 -- common/autotest_common.sh@10 -- # set +x
00:32:30.528 ************************************
00:32:30.528 START TEST spdkcli_nvmf_tcp
************************************
09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:32:30.528 * Looking for test storage...
00:32:30.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v <
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:30.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.810 --rc genhtml_branch_coverage=1 00:32:30.810 --rc genhtml_function_coverage=1 00:32:30.810 --rc genhtml_legend=1 00:32:30.810 --rc geninfo_all_blocks=1 00:32:30.810 --rc geninfo_unexecuted_blocks=1 00:32:30.810 00:32:30.810 ' 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:30.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.810 --rc genhtml_branch_coverage=1 00:32:30.810 --rc genhtml_function_coverage=1 00:32:30.810 --rc genhtml_legend=1 00:32:30.810 --rc geninfo_all_blocks=1 
00:32:30.810 --rc geninfo_unexecuted_blocks=1 00:32:30.810 00:32:30.810 ' 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:30.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.810 --rc genhtml_branch_coverage=1 00:32:30.810 --rc genhtml_function_coverage=1 00:32:30.810 --rc genhtml_legend=1 00:32:30.810 --rc geninfo_all_blocks=1 00:32:30.810 --rc geninfo_unexecuted_blocks=1 00:32:30.810 00:32:30.810 ' 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:30.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.810 --rc genhtml_branch_coverage=1 00:32:30.810 --rc genhtml_function_coverage=1 00:32:30.810 --rc genhtml_legend=1 00:32:30.810 --rc geninfo_all_blocks=1 00:32:30.810 --rc geninfo_unexecuted_blocks=1 00:32:30.810 00:32:30.810 ' 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:30.810 09:53:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
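The trace at the start of the spdkcli test shows `scripts/common.sh` checking whether the installed lcov (1.15) is older than 2: `lt` calls `cmp_versions`, which splits both version strings on `.-:` and walks the components numerically. A simplified, hedged re-implementation of that idea (this is a sketch, not SPDK's exact code; it splits on `.` only and treats missing components as 0):

```shell
#!/usr/bin/env bash
# Hedged sketch of the component-wise version comparison traced above
# (scripts/common.sh 'cmp_versions' / 'lt'); simplified, not the exact code.
ver_lt() {
    local IFS=.
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing component counts as 0
        (( a < b )) && return 0             # strictly smaller: ver1 < ver2
        (( a > b )) && return 1
    done
    return 1                                # equal overall: not less-than
}

# The check in the log: is lcov 1.15 older than 2?
ver_lt 1.15 2 && echo "lcov is older than 2"
```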
00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:30.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=380432 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 380432 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 380432 ']' 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.811 09:53:19 
spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:30.811 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:30.811 [2024-10-07 09:53:19.677384] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:32:30.811 [2024-10-07 09:53:19.677452] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380432 ] 00:32:30.811 [2024-10-07 09:53:19.733763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:31.085 [2024-10-07 09:53:19.855690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.085 [2024-10-07 09:53:19.855694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:31.085 
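`waitforlisten` above blocks until the freshly launched `nvmf_tgt` is up and listening on its RPC socket, and the `(( j = 10 )) ... (( j != 0 ))` countdown earlier in this log applies the same shape to the idle check: a bounded retry loop around a condition. A generic sketch of that pattern (the helper name, 0.1 s delay, and retry budget are illustrative, not SPDK's exact values):

```shell
#!/usr/bin/env bash
# Generic bounded-retry loop in the shape of waitforlisten / the (( j != 0 ))
# countdown traced above. Helper name and delay are illustrative.
wait_for() {
    local cond=$1 tries=${2:-50}
    while (( tries-- > 0 )); do
        eval "$cond" && return 0
        sleep 0.1
    done
    return 1    # budget exhausted; the caller decides how to fail
}

# Demo: wait for a file that appears shortly after polling starts
# (a stand-in for "the target created /var/tmp/spdk.sock").
marker=$(mktemp -u)
( sleep 0.3; touch "$marker" ) &
wait_for "[ -e \"$marker\" ]" 20 && echo "listening"
rm -f "$marker"
```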
09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:31.085 09:53:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:31.085 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:31.085 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:31.085 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:31.085 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:31.085 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:31.085 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:31.085 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:31.085 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:31.085 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:31.085 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:31.085 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:31.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:31.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:31.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:31.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:31.086 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:31.086 ' 00:32:34.402 [2024-10-07 09:53:22.747390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.335 [2024-10-07 09:53:24.019779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:37.858 [2024-10-07 09:53:26.362862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:32:39.755 [2024-10-07 09:53:28.385171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:41.128 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:41.128 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:41.128 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:41.128 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:41.128 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:41.128 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:41.128 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:41.128 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:41.128 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:41.128 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:41.128 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:41.128 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:41.128 09:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:41.128 09:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:41.128 
09:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.128 09:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:41.128 09:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:41.128 09:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.128 09:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:41.128 09:53:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.693 09:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:41.693 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:41.693 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:41.693 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:41.693 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:41.693 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:41.693 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:41.693 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:41.693 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:41.693 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:41.693 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:41.693 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:41.693 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:41.694 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:41.694 ' 00:32:46.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:46.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:46.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:46.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:46.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:46.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:46.955 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:46.955 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:32:46.955 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:32:46.955 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:32:46.955 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:32:46.955 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:32:46.955 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:32:46.955 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:32:46.955 09:53:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:32:46.955 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:46.955 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 380432
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 380432 ']'
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 380432
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 380432
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 380432'
00:32:47.213 killing process with pid 380432
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 380432
00:32:47.213 09:53:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 380432
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 380432 ']'
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 380432
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 380432 ']'
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 380432
00:32:47.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (380432) - No such process
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 380432 is not found'
00:32:47.472 Process with pid 380432 is not found
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:32:47.472
00:32:47.472 real 0m16.795s
00:32:47.472 user 0m35.641s
00:32:47.472 sys 0m0.882s
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:47.472 09:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:47.472 ************************************
00:32:47.472 END TEST spdkcli_nvmf_tcp
00:32:47.472 ************************************
00:32:47.472 09:53:36 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:32:47.472 09:53:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:32:47.472 09:53:36 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:47.472 09:53:36 -- common/autotest_common.sh@10
-- # set +x 00:32:47.472 ************************************ 00:32:47.472 START TEST nvmf_identify_passthru 00:32:47.472 ************************************ 00:32:47.472 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:47.472 * Looking for test storage... 00:32:47.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:47.472 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:47.472 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:32:47.472 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:47.472 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:47.472 09:53:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.472 09:53:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.472 09:53:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.472 09:53:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.472 09:53:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.472 09:53:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:47.473 09:53:36 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:47.473 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.473 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.473 --rc genhtml_branch_coverage=1 00:32:47.473 --rc genhtml_function_coverage=1 00:32:47.473 --rc genhtml_legend=1 00:32:47.473 --rc geninfo_all_blocks=1 00:32:47.473 --rc geninfo_unexecuted_blocks=1 00:32:47.473 00:32:47.473 ' 00:32:47.473 
09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.473 --rc genhtml_branch_coverage=1 00:32:47.473 --rc genhtml_function_coverage=1 00:32:47.473 --rc genhtml_legend=1 00:32:47.473 --rc geninfo_all_blocks=1 00:32:47.473 --rc geninfo_unexecuted_blocks=1 00:32:47.473 00:32:47.473 ' 00:32:47.473 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.473 --rc genhtml_branch_coverage=1 00:32:47.473 --rc genhtml_function_coverage=1 00:32:47.473 --rc genhtml_legend=1 00:32:47.473 --rc geninfo_all_blocks=1 00:32:47.473 --rc geninfo_unexecuted_blocks=1 00:32:47.473 00:32:47.473 ' 00:32:47.473 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.473 --rc genhtml_branch_coverage=1 00:32:47.473 --rc genhtml_function_coverage=1 00:32:47.473 --rc genhtml_legend=1 00:32:47.473 --rc geninfo_all_blocks=1 00:32:47.473 --rc geninfo_unexecuted_blocks=1 00:32:47.473 00:32:47.473 ' 00:32:47.473 09:53:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:47.473 09:53:36 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:47.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.473 09:53:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.473 09:53:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:47.473 09:53:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.473 09:53:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:47.473 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.474 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:47.474 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.474 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:47.474 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:47.474 09:53:36 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:47.474 09:53:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:49.378 
09:53:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:32:49.378 Found 0000:09:00.0 (0x8086 - 0x1592) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:32:49.378 Found 0000:09:00.1 
(0x8086 - 0x1592) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:49.378 Found net devices under 0000:09:00.0: cvl_0_0 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.378 09:53:38 
nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:49.378 Found net devices under 0000:09:00.1: cvl_0_1 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:49.378 
09:53:38 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:49.378 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:49.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:49.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:32:49.642 00:32:49.642 --- 10.0.0.2 ping statistics --- 00:32:49.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.642 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:49.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:32:49.642 00:32:49.642 --- 10.0.0.1 ping statistics --- 00:32:49.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.642 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.642 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:32:49.643 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:49.643 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.643 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:49.643 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:49.643 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.643 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:49.643 09:53:38 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:49.643 09:53:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:49.643 09:53:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:49.643 
09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:84:00.0 00:32:49.643 09:53:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:84:00.0 00:32:49.643 09:53:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:32:49.643 09:53:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:32:49.643 09:53:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:32:49.643 09:53:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:49.643 09:53:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:53.833 09:53:42 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ807002Z71P0FGN 00:32:53.833 09:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:32:53.833 09:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:53.833 09:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:58.021 09:53:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:58.021 09:53:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.021 09:53:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.021 09:53:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=384853 00:32:58.021 09:53:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:58.021 09:53:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:58.021 09:53:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 384853 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 384853 ']' 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:58.021 09:53:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.021 [2024-10-07 09:53:46.908469] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:32:58.021 [2024-10-07 09:53:46.908571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.021 [2024-10-07 09:53:46.969923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:58.280 [2024-10-07 09:53:47.077615] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.280 [2024-10-07 09:53:47.077707] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.280 [2024-10-07 09:53:47.077736] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.280 [2024-10-07 09:53:47.077748] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.280 [2024-10-07 09:53:47.077758] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:58.280 [2024-10-07 09:53:47.079226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.280 [2024-10-07 09:53:47.079292] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:58.280 [2024-10-07 09:53:47.079361] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:58.280 [2024-10-07 09:53:47.079364] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:32:58.280 09:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.280 INFO: Log level set to 20 00:32:58.280 INFO: Requests: 00:32:58.280 { 00:32:58.280 "jsonrpc": "2.0", 00:32:58.280 "method": "nvmf_set_config", 00:32:58.280 "id": 1, 00:32:58.280 "params": { 00:32:58.280 "admin_cmd_passthru": { 00:32:58.280 "identify_ctrlr": true 00:32:58.280 } 00:32:58.280 } 00:32:58.280 } 00:32:58.280 00:32:58.280 INFO: response: 00:32:58.280 { 00:32:58.280 "jsonrpc": "2.0", 00:32:58.280 "id": 1, 00:32:58.280 "result": true 00:32:58.280 } 00:32:58.280 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.280 09:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.280 INFO: Setting log level to 20 00:32:58.280 INFO: Setting log level to 20 00:32:58.280 INFO: Log level set to 20 00:32:58.280 INFO: Log level set to 20 00:32:58.280 
INFO: Requests: 00:32:58.280 { 00:32:58.280 "jsonrpc": "2.0", 00:32:58.280 "method": "framework_start_init", 00:32:58.280 "id": 1 00:32:58.280 } 00:32:58.280 00:32:58.280 INFO: Requests: 00:32:58.280 { 00:32:58.280 "jsonrpc": "2.0", 00:32:58.280 "method": "framework_start_init", 00:32:58.280 "id": 1 00:32:58.280 } 00:32:58.280 00:32:58.280 [2024-10-07 09:53:47.257697] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:58.280 INFO: response: 00:32:58.280 { 00:32:58.280 "jsonrpc": "2.0", 00:32:58.280 "id": 1, 00:32:58.280 "result": true 00:32:58.280 } 00:32:58.280 00:32:58.280 INFO: response: 00:32:58.280 { 00:32:58.280 "jsonrpc": "2.0", 00:32:58.280 "id": 1, 00:32:58.280 "result": true 00:32:58.280 } 00:32:58.280 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.280 09:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.280 INFO: Setting log level to 40 00:32:58.280 INFO: Setting log level to 40 00:32:58.280 INFO: Setting log level to 40 00:32:58.280 [2024-10-07 09:53:47.267759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.280 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.280 09:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:58.538 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.538 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.538 09:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:32:58.538 09:53:47 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.538 09:53:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.818 Nvme0n1 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.818 [2024-10-07 09:53:50.159459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.818 09:53:50 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.818 [ 00:33:01.818 { 00:33:01.818 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:01.818 "subtype": "Discovery", 00:33:01.818 "listen_addresses": [], 00:33:01.818 "allow_any_host": true, 00:33:01.818 "hosts": [] 00:33:01.818 }, 00:33:01.818 { 00:33:01.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.818 "subtype": "NVMe", 00:33:01.818 "listen_addresses": [ 00:33:01.818 { 00:33:01.818 "trtype": "TCP", 00:33:01.818 "adrfam": "IPv4", 00:33:01.818 "traddr": "10.0.0.2", 00:33:01.818 "trsvcid": "4420" 00:33:01.818 } 00:33:01.818 ], 00:33:01.818 "allow_any_host": true, 00:33:01.818 "hosts": [], 00:33:01.818 "serial_number": "SPDK00000000000001", 00:33:01.818 "model_number": "SPDK bdev Controller", 00:33:01.818 "max_namespaces": 1, 00:33:01.818 "min_cntlid": 1, 00:33:01.818 "max_cntlid": 65519, 00:33:01.818 "namespaces": [ 00:33:01.818 { 00:33:01.818 "nsid": 1, 00:33:01.818 "bdev_name": "Nvme0n1", 00:33:01.818 "name": "Nvme0n1", 00:33:01.818 "nguid": "C0166C20949E4370B08F27CB5D1B83A6", 00:33:01.818 "uuid": "c0166c20-949e-4370-b08f-27cb5d1b83a6" 00:33:01.818 } 00:33:01.818 ] 00:33:01.818 } 00:33:01.818 ] 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807002Z71P0FGN 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ807002Z71P0FGN '!=' BTLJ807002Z71P0FGN ']' 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:01.818 09:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:01.818 rmmod nvme_tcp 00:33:01.818 rmmod nvme_fabrics 00:33:01.818 rmmod nvme_keyring 00:33:01.818 09:53:50 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 384853 ']' 00:33:01.818 09:53:50 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 384853 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 384853 ']' 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 384853 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 384853 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 384853' 00:33:01.818 killing process with pid 384853 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 384853 00:33:01.818 09:53:50 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 384853 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.717 09:53:52 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.717 09:53:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:03.717 09:53:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.619 09:53:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.619 00:33:05.619 real 0m18.058s 00:33:05.619 user 0m26.488s 00:33:05.619 sys 0m3.048s 00:33:05.619 09:53:54 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:05.619 09:53:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:05.619 ************************************ 00:33:05.619 END TEST nvmf_identify_passthru 00:33:05.619 ************************************ 00:33:05.619 09:53:54 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:05.619 09:53:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:05.619 09:53:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:05.619 09:53:54 -- common/autotest_common.sh@10 -- # set +x 00:33:05.619 ************************************ 00:33:05.619 START TEST nvmf_dif 00:33:05.619 ************************************ 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:05.619 * Looking for test storage... 
00:33:05.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:05.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.619 --rc genhtml_branch_coverage=1 00:33:05.619 --rc genhtml_function_coverage=1 00:33:05.619 --rc genhtml_legend=1 00:33:05.619 --rc geninfo_all_blocks=1 00:33:05.619 --rc geninfo_unexecuted_blocks=1 00:33:05.619 00:33:05.619 ' 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:05.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.619 --rc genhtml_branch_coverage=1 00:33:05.619 --rc genhtml_function_coverage=1 00:33:05.619 --rc genhtml_legend=1 00:33:05.619 --rc geninfo_all_blocks=1 00:33:05.619 --rc geninfo_unexecuted_blocks=1 00:33:05.619 00:33:05.619 ' 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:33:05.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.619 --rc genhtml_branch_coverage=1 00:33:05.619 --rc genhtml_function_coverage=1 00:33:05.619 --rc genhtml_legend=1 00:33:05.619 --rc geninfo_all_blocks=1 00:33:05.619 --rc geninfo_unexecuted_blocks=1 00:33:05.619 00:33:05.619 ' 00:33:05.619 09:53:54 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:05.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.619 --rc genhtml_branch_coverage=1 00:33:05.619 --rc genhtml_function_coverage=1 00:33:05.619 --rc genhtml_legend=1 00:33:05.619 --rc geninfo_all_blocks=1 00:33:05.619 --rc geninfo_unexecuted_blocks=1 00:33:05.619 00:33:05.619 ' 00:33:05.619 09:53:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:33:05.619 09:53:54 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.619 09:53:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.619 09:53:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.619 09:53:54 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.619 09:53:54 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.619 09:53:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:05.619 09:53:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:05.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.619 09:53:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:05.619 09:53:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:05.619 09:53:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:05.619 09:53:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:05.619 09:53:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:05.619 09:53:54 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.620 09:53:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:05.620 09:53:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.620 09:53:54 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:05.620 09:53:54 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:05.620 09:53:54 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.620 09:53:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:07.523 09:53:56 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.523 09:53:56 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:33:07.524 Found 0000:09:00.0 (0x8086 - 0x1592) 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:33:07.524 Found 0000:09:00.1 (0x8086 - 0x1592) 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.524 09:53:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:33:07.782 09:53:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:33:07.782 09:53:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.782 09:53:56 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.782 09:53:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.782 09:53:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:07.783 09:53:56 nvmf_dif -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:07.783 Found net devices under 0000:09:00.0: cvl_0_0 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:07.783 Found net devices under 0000:09:00.1: cvl_0_1 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.783 
09:53:56 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
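For reference, the `nvmf_tcp_init` sequence traced above (flush addresses, move the target interface into a private network namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420) can be condensed into the sketch below. Interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the addresses are taken from this run; this is a dry-run reconstruction of the steps in `nvmf/common.sh`, not the helper itself. It echoes the commands by default; run as root with `RUN=` (empty) to apply them.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the log above.
# RUN=echo (default) only prints the commands; RUN= executes them.
RUN="${RUN:-echo}"

nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    $RUN ip netns add "$ns"
    # The target-side interface lives inside the namespace; the
    # initiator side stays in the default namespace.
    $RUN ip link set "$target_if" netns "$ns"
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP traffic to the default listener port.
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch
```

The two `ping -c 1` probes in the log (one from each side) then confirm the namespace boundary is routable in both directions before the target app is started with `ip netns exec`.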
00:33:07.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:33:07.783 00:33:07.783 --- 10.0.0.2 ping statistics --- 00:33:07.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.783 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:07.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:33:07.783 00:33:07.783 --- 10.0.0.1 ping statistics --- 00:33:07.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.783 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:33:07.783 09:53:56 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:08.719 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:08.719 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:08.719 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:08.719 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:08.719 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:08.719 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:08.719 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:08.719 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:08.719 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:08.719 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:08.719 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:08.719 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:08.719 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:33:08.719 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:08.719 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:08.978 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:08.978 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:08.978 09:53:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:08.978 09:53:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:08.978 09:53:57 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:08.978 09:53:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=387967 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:08.978 09:53:57 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 387967 00:33:08.978 09:53:57 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 387967 ']' 00:33:08.978 09:53:57 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.978 09:53:57 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:08.978 09:53:57 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:08.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.979 09:53:57 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:08.979 09:53:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.238 [2024-10-07 09:53:58.000439] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:33:09.238 [2024-10-07 09:53:58.000515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.238 [2024-10-07 09:53:58.058649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.238 [2024-10-07 09:53:58.157582] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.238 [2024-10-07 09:53:58.157646] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.238 [2024-10-07 09:53:58.157682] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.238 [2024-10-07 09:53:58.157694] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.238 [2024-10-07 09:53:58.157703] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:09.238 [2024-10-07 09:53:58.158193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:33:09.496 09:53:58 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.496 09:53:58 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.496 09:53:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:09.496 09:53:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.496 [2024-10-07 09:53:58.298044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.496 09:53:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:09.496 09:53:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.496 ************************************ 00:33:09.496 START TEST fio_dif_1_default 00:33:09.496 ************************************ 00:33:09.496 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:33:09.496 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:09.496 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:09.496 09:53:58 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:09.497 bdev_null0 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:09.497 [2024-10-07 09:53:58.354282] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:09.497 { 00:33:09.497 "params": { 00:33:09.497 "name": "Nvme$subsystem", 00:33:09.497 "trtype": "$TEST_TRANSPORT", 00:33:09.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.497 "adrfam": "ipv4", 00:33:09.497 "trsvcid": "$NVMF_PORT", 00:33:09.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.497 "hdgst": ${hdgst:-false}, 00:33:09.497 "ddgst": ${ddgst:-false} 00:33:09.497 }, 00:33:09.497 "method": "bdev_nvme_attach_controller" 00:33:09.497 } 00:33:09.497 EOF 00:33:09.497 )") 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:09.497 "params": { 00:33:09.497 "name": "Nvme0", 00:33:09.497 "trtype": "tcp", 00:33:09.497 "traddr": "10.0.0.2", 00:33:09.497 "adrfam": "ipv4", 00:33:09.497 "trsvcid": "4420", 00:33:09.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:09.497 "hdgst": false, 00:33:09.497 "ddgst": false 00:33:09.497 }, 00:33:09.497 "method": "bdev_nvme_attach_controller" 00:33:09.497 }' 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:09.497 09:53:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:09.755 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:09.755 fio-3.35 
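The JSON printed above and fed to fio on `/dev/fd/62` is produced by `gen_nvmf_target_json` in `nvmf/common.sh`: one `bdev_nvme_attach_controller` params object per subsystem id, comma-joined. A trimmed sketch of that helper (values hard-coded to the ones used in this run; the real function substitutes `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` and post-processes with `jq`):

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json: emit one attach-controller config
# object per subsystem id, comma-joined (as in the log output above).
gen_target_json_sketch() {
    local sub out=()
    for sub in "${@:-0}"; do
        out+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
            "$sub" "$sub" "$sub")")
    done
    local IFS=,
    printf '%s\n' "${out[*]}"
}

gen_target_json_sketch 0
```

Passing multiple ids (e.g. `gen_target_json_sketch 0 1`) yields the two-controller config used later by the multi-subsystem test.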
00:33:09.755 Starting 1 thread 00:33:21.952 00:33:21.952 filename0: (groupid=0, jobs=1): err= 0: pid=388191: Mon Oct 7 09:54:09 2024 00:33:21.952 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:33:21.952 slat (nsec): min=4862, max=88679, avg=9265.77, stdev=4226.31 00:33:21.952 clat (usec): min=40823, max=46129, avg=40991.93, stdev=331.38 00:33:21.952 lat (usec): min=40850, max=46156, avg=41001.19, stdev=331.49 00:33:21.952 clat percentiles (usec): 00:33:21.952 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:21.952 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:21.952 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:21.952 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:33:21.952 | 99.99th=[45876] 00:33:21.952 bw ( KiB/s): min= 384, max= 416, per=99.48%, avg=388.80, stdev=11.72, samples=20 00:33:21.952 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:21.952 lat (msec) : 50=100.00% 00:33:21.952 cpu : usr=90.50%, sys=9.18%, ctx=21, majf=0, minf=234 00:33:21.952 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:21.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.952 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.952 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:21.952 00:33:21.952 Run status group 0 (all jobs): 00:33:21.952 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10010-10010msec 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 00:33:21.952 real 0m11.180s 00:33:21.952 user 0m10.324s 00:33:21.952 sys 0m1.170s 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 ************************************ 00:33:21.952 END TEST fio_dif_1_default 00:33:21.952 ************************************ 00:33:21.952 09:54:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:21.952 09:54:09 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:21.952 09:54:09 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 ************************************ 00:33:21.952 START TEST fio_dif_1_multi_subsystems 00:33:21.952 ************************************ 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 bdev_null0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 [2024-10-07 09:54:09.580309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 bdev_null1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 09:54:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:21.952 09:54:09 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:21.952 { 00:33:21.952 "params": { 00:33:21.952 "name": "Nvme$subsystem", 00:33:21.952 "trtype": "$TEST_TRANSPORT", 00:33:21.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.952 "adrfam": "ipv4", 00:33:21.952 "trsvcid": "$NVMF_PORT", 00:33:21.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.952 "hdgst": ${hdgst:-false}, 00:33:21.952 "ddgst": ${ddgst:-false} 00:33:21.952 }, 00:33:21.952 "method": "bdev_nvme_attach_controller" 00:33:21.952 } 00:33:21.952 EOF 00:33:21.952 )") 00:33:21.952 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@580 -- # cat 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:21.953 { 00:33:21.953 "params": { 00:33:21.953 "name": "Nvme$subsystem", 00:33:21.953 "trtype": "$TEST_TRANSPORT", 00:33:21.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.953 "adrfam": "ipv4", 00:33:21.953 "trsvcid": "$NVMF_PORT", 00:33:21.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.953 "hdgst": ${hdgst:-false}, 00:33:21.953 "ddgst": ${ddgst:-false} 00:33:21.953 }, 00:33:21.953 "method": "bdev_nvme_attach_controller" 00:33:21.953 } 00:33:21.953 EOF 00:33:21.953 )") 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:21.953 "params": { 00:33:21.953 "name": "Nvme0", 00:33:21.953 "trtype": "tcp", 00:33:21.953 "traddr": "10.0.0.2", 00:33:21.953 "adrfam": "ipv4", 00:33:21.953 "trsvcid": "4420", 00:33:21.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:21.953 "hdgst": false, 00:33:21.953 "ddgst": false 00:33:21.953 }, 00:33:21.953 "method": "bdev_nvme_attach_controller" 00:33:21.953 },{ 00:33:21.953 "params": { 00:33:21.953 "name": "Nvme1", 00:33:21.953 "trtype": "tcp", 00:33:21.953 "traddr": "10.0.0.2", 00:33:21.953 "adrfam": "ipv4", 00:33:21.953 "trsvcid": "4420", 00:33:21.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:21.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:21.953 "hdgst": false, 00:33:21.953 "ddgst": false 00:33:21.953 }, 00:33:21.953 "method": "bdev_nvme_attach_controller" 00:33:21.953 }' 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:21.953 09:54:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:21.953 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:21.953 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:21.953 fio-3.35 00:33:21.953 Starting 2 threads 00:33:31.923 00:33:31.923 filename0: (groupid=0, jobs=1): err= 0: pid=390164: Mon Oct 7 09:54:20 2024 00:33:31.923 read: IOPS=235, BW=941KiB/s (964kB/s)(9424KiB/10011msec) 00:33:31.923 slat (nsec): min=7215, max=63657, avg=9142.07, stdev=3212.87 00:33:31.923 clat (usec): min=530, max=44572, avg=16967.97, stdev=20007.09 00:33:31.923 lat (usec): min=538, max=44622, avg=16977.11, stdev=20007.07 00:33:31.923 clat percentiles (usec): 00:33:31.923 | 1.00th=[ 545], 5.00th=[ 570], 10.00th=[ 586], 20.00th=[ 611], 00:33:31.923 | 30.00th=[ 644], 40.00th=[ 701], 50.00th=[ 758], 60.00th=[ 1090], 00:33:31.923 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:33:31.923 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:33:31.923 | 99.99th=[44827] 00:33:31.923 bw ( KiB/s): min= 768, max= 1152, per=70.54%, avg=940.80, stdev=101.94, samples=20 00:33:31.923 iops : min= 192, max= 288, avg=235.20, stdev=25.48, samples=20 00:33:31.923 lat (usec) : 750=49.28%, 1000=10.27% 00:33:31.923 lat (msec) : 2=0.55%, 50=39.90% 00:33:31.923 cpu : usr=94.77%, sys=4.92%, ctx=16, majf=0, minf=194 00:33:31.923 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:31.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:31.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.923 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.923 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:31.923 filename1: (groupid=0, jobs=1): err= 0: pid=390165: Mon Oct 7 09:54:20 2024 00:33:31.923 read: IOPS=98, BW=393KiB/s (402kB/s)(3936KiB/10026msec) 00:33:31.924 slat (nsec): min=7236, max=49481, avg=9478.48, stdev=3244.22 00:33:31.924 clat (usec): min=581, max=44605, avg=40725.19, stdev=3635.47 00:33:31.924 lat (usec): min=589, max=44655, avg=40734.67, stdev=3635.52 00:33:31.924 clat percentiles (usec): 00:33:31.924 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:31.924 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:31.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:33:31.924 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:33:31.924 | 99.99th=[44827] 00:33:31.924 bw ( KiB/s): min= 384, max= 416, per=29.34%, avg=392.00, stdev=14.22, samples=20 00:33:31.924 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:33:31.924 lat (usec) : 750=0.41%, 1000=0.41% 00:33:31.924 lat (msec) : 50=99.19% 00:33:31.924 cpu : usr=95.25%, sys=4.44%, ctx=15, majf=0, minf=127 00:33:31.924 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:31.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.924 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.924 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:31.924 00:33:31.924 Run status group 0 (all jobs): 00:33:31.924 READ: bw=1333KiB/s (1365kB/s), 393KiB/s-941KiB/s (402kB/s-964kB/s), io=13.0MiB (13.7MB), run=10011-10026msec 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 
-- # destroy_subsystems 0 1 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 09:54:21 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.182 00:33:32.182 real 0m11.510s 00:33:32.182 user 0m20.477s 00:33:32.182 sys 0m1.228s 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 ************************************ 00:33:32.182 END TEST fio_dif_1_multi_subsystems 00:33:32.182 ************************************ 00:33:32.182 09:54:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:32.182 09:54:21 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:32.182 09:54:21 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 ************************************ 00:33:32.182 START TEST fio_dif_rand_params 00:33:32.182 ************************************ 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:32.182 09:54:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 bdev_null0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.182 [2024-10-07 09:54:21.145870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:32.182 09:54:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:32.182 { 00:33:32.183 "params": { 00:33:32.183 "name": "Nvme$subsystem", 00:33:32.183 "trtype": "$TEST_TRANSPORT", 00:33:32.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:32.183 "adrfam": "ipv4", 00:33:32.183 "trsvcid": "$NVMF_PORT", 00:33:32.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:32.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:32.183 "hdgst": ${hdgst:-false}, 00:33:32.183 "ddgst": ${ddgst:-false} 00:33:32.183 }, 
00:33:32.183 "method": "bdev_nvme_attach_controller" 00:33:32.183 } 00:33:32.183 EOF 00:33:32.183 )") 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 
00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:32.183 "params": { 00:33:32.183 "name": "Nvme0", 00:33:32.183 "trtype": "tcp", 00:33:32.183 "traddr": "10.0.0.2", 00:33:32.183 "adrfam": "ipv4", 00:33:32.183 "trsvcid": "4420", 00:33:32.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.183 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.183 "hdgst": false, 00:33:32.183 "ddgst": false 00:33:32.183 }, 00:33:32.183 "method": "bdev_nvme_attach_controller" 00:33:32.183 }' 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:32.183 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:32.441 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:32.441 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:32.441 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:32.441 09:54:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:32.441 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:32.441 ... 00:33:32.441 fio-3.35 00:33:32.441 Starting 3 threads 00:33:39.000 00:33:39.000 filename0: (groupid=0, jobs=1): err= 0: pid=391617: Mon Oct 7 09:54:27 2024 00:33:39.000 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(146MiB/5004msec) 00:33:39.000 slat (nsec): min=7415, max=78401, avg=18384.00, stdev=4311.26 00:33:39.000 clat (usec): min=4282, max=52770, avg=12817.79, stdev=5182.42 00:33:39.000 lat (usec): min=4294, max=52788, avg=12836.17, stdev=5182.35 00:33:39.000 clat percentiles (usec): 00:33:39.000 | 1.00th=[ 6587], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10552], 00:33:39.000 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[12911], 00:33:39.000 | 70.00th=[13566], 80.00th=[14091], 90.00th=[15008], 95.00th=[15795], 00:33:39.000 | 99.00th=[50594], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:33:39.000 | 99.99th=[52691] 00:33:39.000 bw ( KiB/s): min=26880, max=32768, per=33.76%, avg=29849.60, stdev=1651.15, samples=10 00:33:39.000 iops : min= 210, max= 256, avg=233.20, stdev=12.90, samples=10 00:33:39.000 lat (msec) : 10=15.65%, 20=82.81%, 50=0.51%, 100=1.03% 00:33:39.000 cpu : usr=95.24%, sys=4.28%, ctx=17, majf=0, minf=86 00:33:39.000 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.000 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.000 filename0: (groupid=0, jobs=1): err= 0: pid=391618: Mon Oct 7 09:54:27 2024 00:33:39.000 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(143MiB/5047msec) 00:33:39.000 slat (nsec): min=8029, max=52828, avg=18584.68, 
stdev=4698.66 00:33:39.000 clat (usec): min=5480, max=92919, avg=13186.14, stdev=8673.64 00:33:39.000 lat (usec): min=5500, max=92935, avg=13204.73, stdev=8673.65 00:33:39.000 clat percentiles (usec): 00:33:39.000 | 1.00th=[ 6980], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10290], 00:33:39.000 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:33:39.000 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14222], 95.00th=[15533], 00:33:39.000 | 99.00th=[53216], 99.50th=[54789], 99.90th=[92799], 99.95th=[92799], 00:33:39.000 | 99.99th=[92799] 00:33:39.000 bw ( KiB/s): min=23552, max=32512, per=33.01%, avg=29184.00, stdev=3346.55, samples=10 00:33:39.000 iops : min= 184, max= 254, avg=228.00, stdev=26.14, samples=10 00:33:39.000 lat (msec) : 10=16.01%, 20=79.97%, 50=0.87%, 100=3.15% 00:33:39.000 cpu : usr=93.10%, sys=5.51%, ctx=132, majf=0, minf=113 00:33:39.000 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.000 issued rwts: total=1143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.000 filename0: (groupid=0, jobs=1): err= 0: pid=391619: Mon Oct 7 09:54:27 2024 00:33:39.000 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(147MiB/5044msec) 00:33:39.000 slat (nsec): min=7962, max=71877, avg=17540.79, stdev=3822.82 00:33:39.000 clat (usec): min=6390, max=89361, avg=12832.57, stdev=5337.36 00:33:39.000 lat (usec): min=6405, max=89376, avg=12850.11, stdev=5337.13 00:33:39.000 clat percentiles (usec): 00:33:39.000 | 1.00th=[ 7177], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[10421], 00:33:39.000 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:33:39.000 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15270], 95.00th=[16057], 00:33:39.000 | 99.00th=[47973], 99.50th=[51643], 
99.90th=[53740], 99.95th=[89654], 00:33:39.000 | 99.99th=[89654] 00:33:39.000 bw ( KiB/s): min=24576, max=34048, per=33.94%, avg=30003.20, stdev=3085.72, samples=10 00:33:39.000 iops : min= 192, max= 266, avg=234.40, stdev=24.11, samples=10 00:33:39.000 lat (msec) : 10=18.06%, 20=80.58%, 50=0.43%, 100=0.94% 00:33:39.000 cpu : usr=95.80%, sys=3.69%, ctx=16, majf=0, minf=119 00:33:39.000 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.000 issued rwts: total=1174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.000 00:33:39.001 Run status group 0 (all jobs): 00:33:39.001 READ: bw=86.3MiB/s (90.5MB/s), 28.3MiB/s-29.2MiB/s (29.7MB/s-30.6MB/s), io=436MiB (457MB), run=5004-5047msec 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:39.001 09:54:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 bdev_null0 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 [2024-10-07 09:54:27.492198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 bdev_null1 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:39.001 bdev_null2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@558 -- # local subsystem config 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.001 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:39.001 { 00:33:39.001 "params": { 00:33:39.001 "name": "Nvme$subsystem", 00:33:39.001 "trtype": "$TEST_TRANSPORT", 00:33:39.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.001 "adrfam": "ipv4", 00:33:39.001 "trsvcid": "$NVMF_PORT", 00:33:39.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.001 "hdgst": ${hdgst:-false}, 00:33:39.001 "ddgst": ${ddgst:-false} 00:33:39.002 }, 00:33:39.002 "method": "bdev_nvme_attach_controller" 00:33:39.002 } 00:33:39.002 EOF 00:33:39.002 )") 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:33:39.002 
09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:39.002 { 00:33:39.002 "params": { 00:33:39.002 "name": "Nvme$subsystem", 00:33:39.002 "trtype": "$TEST_TRANSPORT", 00:33:39.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.002 "adrfam": "ipv4", 00:33:39.002 "trsvcid": "$NVMF_PORT", 00:33:39.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.002 "hdgst": ${hdgst:-false}, 00:33:39.002 "ddgst": ${ddgst:-false} 00:33:39.002 }, 00:33:39.002 "method": "bdev_nvme_attach_controller" 00:33:39.002 } 00:33:39.002 EOF 00:33:39.002 )") 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:39.002 09:54:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:39.002 { 00:33:39.002 "params": { 00:33:39.002 "name": "Nvme$subsystem", 00:33:39.002 "trtype": "$TEST_TRANSPORT", 00:33:39.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.002 "adrfam": "ipv4", 00:33:39.002 "trsvcid": "$NVMF_PORT", 00:33:39.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.002 "hdgst": ${hdgst:-false}, 00:33:39.002 "ddgst": ${ddgst:-false} 00:33:39.002 }, 00:33:39.002 "method": "bdev_nvme_attach_controller" 00:33:39.002 } 00:33:39.002 EOF 00:33:39.002 )") 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:39.002 "params": { 00:33:39.002 "name": "Nvme0", 00:33:39.002 "trtype": "tcp", 00:33:39.002 "traddr": "10.0.0.2", 00:33:39.002 "adrfam": "ipv4", 00:33:39.002 "trsvcid": "4420", 00:33:39.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.002 "hdgst": false, 00:33:39.002 "ddgst": false 00:33:39.002 }, 00:33:39.002 "method": "bdev_nvme_attach_controller" 00:33:39.002 },{ 00:33:39.002 "params": { 00:33:39.002 "name": "Nvme1", 00:33:39.002 "trtype": "tcp", 00:33:39.002 "traddr": "10.0.0.2", 00:33:39.002 "adrfam": "ipv4", 00:33:39.002 "trsvcid": "4420", 00:33:39.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:39.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:39.002 "hdgst": false, 00:33:39.002 "ddgst": false 00:33:39.002 }, 00:33:39.002 "method": "bdev_nvme_attach_controller" 00:33:39.002 },{ 00:33:39.002 "params": { 00:33:39.002 "name": "Nvme2", 00:33:39.002 "trtype": "tcp", 00:33:39.002 "traddr": "10.0.0.2", 00:33:39.002 "adrfam": "ipv4", 00:33:39.002 "trsvcid": "4420", 00:33:39.002 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:39.002 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:39.002 "hdgst": false, 00:33:39.002 "ddgst": false 00:33:39.002 }, 00:33:39.002 "method": "bdev_nvme_attach_controller" 00:33:39.002 }' 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.002 09:54:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:39.002 09:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.002 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.002 ... 00:33:39.002 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.002 ... 00:33:39.002 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.002 ... 
00:33:39.002 fio-3.35 00:33:39.002 Starting 24 threads 00:33:51.203 00:33:51.203 filename0: (groupid=0, jobs=1): err= 0: pid=392360: Mon Oct 7 09:54:38 2024 00:33:51.203 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10005msec) 00:33:51.203 slat (nsec): min=6959, max=84865, avg=34072.19, stdev=8180.32 00:33:51.203 clat (usec): min=19979, max=60615, avg=33153.74, stdev=1773.12 00:33:51.203 lat (usec): min=20002, max=60631, avg=33187.81, stdev=1772.29 00:33:51.203 clat percentiles (usec): 00:33:51.203 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.203 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:33:51.203 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.203 | 99.00th=[33817], 99.50th=[34341], 99.90th=[60556], 99.95th=[60556], 00:33:51.203 | 99.99th=[60556] 00:33:51.203 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1906.53, stdev=58.73, samples=19 00:33:51.203 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:33:51.203 lat (msec) : 20=0.02%, 50=99.64%, 100=0.33% 00:33:51.203 cpu : usr=96.82%, sys=1.96%, ctx=198, majf=0, minf=9 00:33:51.203 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.203 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.203 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.203 filename0: (groupid=0, jobs=1): err= 0: pid=392361: Mon Oct 7 09:54:38 2024 00:33:51.203 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10004msec) 00:33:51.203 slat (nsec): min=11159, max=57702, avg=29380.51, stdev=9279.67 00:33:51.203 clat (usec): min=32527, max=45757, avg=33220.50, stdev=765.49 00:33:51.203 lat (usec): min=32556, max=45783, avg=33249.88, stdev=764.18 00:33:51.203 clat percentiles (usec): 00:33:51.203 | 
1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:33:51.203 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.203 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.203 | 99.00th=[33817], 99.50th=[34341], 99.90th=[45876], 99.95th=[45876], 00:33:51.203 | 99.99th=[45876] 00:33:51.203 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.203 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.203 lat (msec) : 50=100.00% 00:33:51.203 cpu : usr=98.45%, sys=1.15%, ctx=19, majf=0, minf=9 00:33:51.203 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.203 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.203 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.203 filename0: (groupid=0, jobs=1): err= 0: pid=392362: Mon Oct 7 09:54:38 2024 00:33:51.203 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:33:51.203 slat (nsec): min=8216, max=87788, avg=31847.60, stdev=6674.27 00:33:51.203 clat (usec): min=17587, max=60897, avg=33182.08, stdev=928.80 00:33:51.203 lat (usec): min=17671, max=60913, avg=33213.93, stdev=927.14 00:33:51.203 clat percentiles (usec): 00:33:51.203 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.203 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.203 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.203 | 99.00th=[34341], 99.50th=[35390], 99.90th=[44303], 99.95th=[44303], 00:33:51.203 | 99.99th=[61080] 00:33:51.203 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.203 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.203 lat 
(msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:33:51.203 cpu : usr=98.32%, sys=1.28%, ctx=37, majf=0, minf=9 00:33:51.203 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.203 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.203 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.203 filename0: (groupid=0, jobs=1): err= 0: pid=392363: Mon Oct 7 09:54:38 2024 00:33:51.203 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10012msec) 00:33:51.203 slat (nsec): min=5464, max=51479, avg=32343.69, stdev=5893.12 00:33:51.203 clat (usec): min=13666, max=55571, avg=33087.53, stdev=1975.71 00:33:51.203 lat (usec): min=13680, max=55593, avg=33119.87, stdev=1976.21 00:33:51.203 clat percentiles (usec): 00:33:51.203 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.203 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.203 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.203 | 99.00th=[34866], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:33:51.203 | 99.99th=[55313] 00:33:51.203 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.203 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.203 lat (msec) : 20=0.88%, 50=99.08%, 100=0.04% 00:33:51.203 cpu : usr=98.52%, sys=1.10%, ctx=17, majf=0, minf=9 00:33:51.204 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:51.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.204 latency : target=0, window=0, percentile=100.00%, depth=16 
00:33:51.204 filename0: (groupid=0, jobs=1): err= 0: pid=392364: Mon Oct 7 09:54:38 2024 00:33:51.204 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.8MiB/10020msec) 00:33:51.204 slat (nsec): min=8463, max=47750, avg=27746.74, stdev=8400.96 00:33:51.204 clat (usec): min=25729, max=35180, avg=33175.15, stdev=514.03 00:33:51.204 lat (usec): min=25764, max=35224, avg=33202.90, stdev=511.89 00:33:51.204 clat percentiles (usec): 00:33:51.204 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:33:51.204 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.204 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:33:51.204 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[35390], 00:33:51.204 | 99.99th=[35390] 00:33:51.204 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.60, stdev=28.62, samples=20 00:33:51.204 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:33:51.204 lat (msec) : 50=100.00% 00:33:51.204 cpu : usr=98.52%, sys=1.09%, ctx=12, majf=0, minf=9 00:33:51.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.204 filename0: (groupid=0, jobs=1): err= 0: pid=392365: Mon Oct 7 09:54:38 2024 00:33:51.204 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10011msec) 00:33:51.204 slat (nsec): min=8579, max=53147, avg=27164.66, stdev=8758.78 00:33:51.204 clat (usec): min=17981, max=42152, avg=33149.86, stdev=744.73 00:33:51.204 lat (usec): min=17992, max=42164, avg=33177.02, stdev=744.07 00:33:51.204 clat percentiles (usec): 00:33:51.204 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:33:51.204 | 30.00th=[33162], 
40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.204 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.204 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:33:51.204 | 99.99th=[42206] 00:33:51.204 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.60, stdev=28.62, samples=20 00:33:51.204 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:33:51.204 lat (msec) : 20=0.04%, 50=99.96% 00:33:51.204 cpu : usr=98.33%, sys=1.29%, ctx=12, majf=0, minf=9 00:33:51.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.204 filename0: (groupid=0, jobs=1): err= 0: pid=392366: Mon Oct 7 09:54:38 2024 00:33:51.204 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10004msec) 00:33:51.204 slat (nsec): min=8127, max=60756, avg=13712.09, stdev=6838.64 00:33:51.204 clat (usec): min=28267, max=45681, avg=33332.15, stdev=764.33 00:33:51.204 lat (usec): min=28278, max=45711, avg=33345.86, stdev=764.78 00:33:51.204 clat percentiles (usec): 00:33:51.204 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:33:51.204 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:33:51.204 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:33:51.204 | 99.00th=[34341], 99.50th=[34341], 99.90th=[45351], 99.95th=[45876], 00:33:51.204 | 99.99th=[45876] 00:33:51.204 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.204 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.204 lat (msec) : 50=100.00% 00:33:51.204 cpu : usr=98.58%, sys=1.04%, ctx=12, majf=0, minf=9 
00:33:51.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.204 filename0: (groupid=0, jobs=1): err= 0: pid=392368: Mon Oct 7 09:54:38 2024 00:33:51.204 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:33:51.204 slat (usec): min=4, max=117, avg=45.91, stdev=25.69 00:33:51.204 clat (usec): min=25691, max=52973, avg=33036.72, stdev=1280.21 00:33:51.204 lat (usec): min=25706, max=52985, avg=33082.64, stdev=1276.44 00:33:51.204 clat percentiles (usec): 00:33:51.204 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:33:51.204 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:33:51.204 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:33:51.204 | 99.00th=[34341], 99.50th=[34341], 99.90th=[52691], 99.95th=[53216], 00:33:51.204 | 99.99th=[53216] 00:33:51.204 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1906.53, stdev=40.36, samples=19 00:33:51.204 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:33:51.204 lat (msec) : 50=99.67%, 100=0.33% 00:33:51.204 cpu : usr=97.42%, sys=1.72%, ctx=110, majf=0, minf=9 00:33:51.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.204 filename1: (groupid=0, jobs=1): err= 0: pid=392369: Mon Oct 7 09:54:38 2024 00:33:51.204 read: IOPS=478, BW=1913KiB/s 
(1959kB/s)(18.7MiB/10005msec) 00:33:51.204 slat (nsec): min=14383, max=85496, avg=34007.94, stdev=12283.02 00:33:51.204 clat (usec): min=19705, max=60972, avg=33112.67, stdev=1789.28 00:33:51.204 lat (usec): min=19725, max=60986, avg=33146.67, stdev=1790.35 00:33:51.204 clat percentiles (usec): 00:33:51.204 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:33:51.204 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:33:51.204 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:33:51.204 | 99.00th=[33817], 99.50th=[34341], 99.90th=[61080], 99.95th=[61080], 00:33:51.204 | 99.99th=[61080] 00:33:51.204 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1906.53, stdev=58.73, samples=19 00:33:51.204 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:33:51.204 lat (msec) : 20=0.23%, 50=99.44%, 100=0.33% 00:33:51.204 cpu : usr=98.44%, sys=1.12%, ctx=17, majf=0, minf=9 00:33:51.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.204 filename1: (groupid=0, jobs=1): err= 0: pid=392370: Mon Oct 7 09:54:38 2024 00:33:51.204 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.8MiB/10020msec) 00:33:51.204 slat (nsec): min=8438, max=59761, avg=31869.11, stdev=7222.22 00:33:51.204 clat (usec): min=24033, max=45253, avg=33102.00, stdev=596.31 00:33:51.204 lat (usec): min=24068, max=45292, avg=33133.87, stdev=596.34 00:33:51.204 clat percentiles (usec): 00:33:51.204 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.204 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:33:51.204 | 70.00th=[33162], 80.00th=[33424], 
90.00th=[33424], 95.00th=[33817], 00:33:51.204 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[35390], 00:33:51.204 | 99.99th=[45351] 00:33:51.204 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.60, stdev=28.62, samples=20 00:33:51.204 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:33:51.204 lat (msec) : 50=100.00% 00:33:51.204 cpu : usr=98.11%, sys=1.19%, ctx=49, majf=0, minf=10 00:33:51.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.204 filename1: (groupid=0, jobs=1): err= 0: pid=392371: Mon Oct 7 09:54:38 2024 00:33:51.204 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10010msec) 00:33:51.204 slat (nsec): min=4538, max=53030, avg=32130.32, stdev=6948.27 00:33:51.204 clat (usec): min=13653, max=56215, avg=33078.27, stdev=1317.62 00:33:51.204 lat (usec): min=13667, max=56233, avg=33110.40, stdev=1318.28 00:33:51.204 clat percentiles (usec): 00:33:51.204 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.204 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:33:51.204 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.204 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36963], 99.95th=[39060], 00:33:51.204 | 99.99th=[56361] 00:33:51.204 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.204 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.204 lat (msec) : 20=0.38%, 50=99.58%, 100=0.04% 00:33:51.204 cpu : usr=97.08%, sys=1.84%, ctx=399, majf=0, minf=9 00:33:51.204 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 
00:33:51.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.204 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.204 filename1: (groupid=0, jobs=1): err= 0: pid=392372: Mon Oct 7 09:54:38 2024 00:33:51.204 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10007msec) 00:33:51.204 slat (nsec): min=3858, max=57065, avg=32867.44, stdev=5439.15 00:33:51.204 clat (usec): min=31729, max=47745, avg=33180.42, stdev=897.93 00:33:51.204 lat (usec): min=31760, max=47763, avg=33213.28, stdev=896.30 00:33:51.204 clat percentiles (usec): 00:33:51.204 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.204 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.205 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.205 | 99.00th=[34341], 99.50th=[35390], 99.90th=[47973], 99.95th=[47973], 00:33:51.205 | 99.99th=[47973] 00:33:51.205 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1906.53, stdev=40.36, samples=19 00:33:51.205 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:33:51.205 lat (msec) : 50=100.00% 00:33:51.205 cpu : usr=97.65%, sys=1.50%, ctx=125, majf=0, minf=9 00:33:51.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.205 filename1: (groupid=0, jobs=1): err= 0: pid=392373: Mon Oct 7 09:54:38 2024 00:33:51.205 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10004msec) 00:33:51.205 slat (nsec): min=5128, max=72439, 
avg=30956.20, stdev=8064.68 00:33:51.205 clat (usec): min=32522, max=45875, avg=33200.11, stdev=779.85 00:33:51.205 lat (usec): min=32564, max=45888, avg=33231.07, stdev=778.20 00:33:51.205 clat percentiles (usec): 00:33:51.205 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.205 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.205 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.205 | 99.00th=[33817], 99.50th=[34341], 99.90th=[45876], 99.95th=[45876], 00:33:51.205 | 99.99th=[45876] 00:33:51.205 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.205 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.205 lat (msec) : 50=100.00% 00:33:51.205 cpu : usr=98.04%, sys=1.34%, ctx=69, majf=0, minf=9 00:33:51.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.205 filename1: (groupid=0, jobs=1): err= 0: pid=392374: Mon Oct 7 09:54:38 2024 00:33:51.205 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.8MiB/10020msec) 00:33:51.205 slat (nsec): min=8406, max=51943, avg=28263.39, stdev=7697.22 00:33:51.205 clat (usec): min=24198, max=51803, avg=33158.66, stdev=1050.87 00:33:51.205 lat (usec): min=24210, max=51834, avg=33186.92, stdev=1050.27 00:33:51.205 clat percentiles (usec): 00:33:51.205 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:33:51.205 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.205 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.205 | 99.00th=[34341], 99.50th=[34341], 99.90th=[45351], 
99.95th=[45351], 00:33:51.205 | 99.99th=[51643] 00:33:51.205 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.60, stdev=28.62, samples=20 00:33:51.205 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:33:51.205 lat (msec) : 50=99.96%, 100=0.04% 00:33:51.205 cpu : usr=98.33%, sys=1.29%, ctx=11, majf=0, minf=9 00:33:51.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.205 filename1: (groupid=0, jobs=1): err= 0: pid=392376: Mon Oct 7 09:54:38 2024 00:33:51.205 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10006msec) 00:33:51.205 slat (usec): min=10, max=101, avg=30.70, stdev= 7.73 00:33:51.205 clat (usec): min=20002, max=77399, avg=33177.54, stdev=1958.12 00:33:51.205 lat (usec): min=20031, max=77420, avg=33208.24, stdev=1957.47 00:33:51.205 clat percentiles (usec): 00:33:51.205 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.205 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.205 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.205 | 99.00th=[33817], 99.50th=[34341], 99.90th=[61604], 99.95th=[61604], 00:33:51.205 | 99.99th=[77071] 00:33:51.205 bw ( KiB/s): min= 1660, max= 1920, per=4.15%, avg=1906.32, stdev=59.65, samples=19 00:33:51.205 iops : min= 415, max= 480, avg=476.58, stdev=14.91, samples=19 00:33:51.205 lat (msec) : 50=99.67%, 100=0.33% 00:33:51.205 cpu : usr=98.09%, sys=1.37%, ctx=41, majf=0, minf=9 00:33:51.205 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.205 filename1: (groupid=0, jobs=1): err= 0: pid=392377: Mon Oct 7 09:54:38 2024 00:33:51.205 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10010msec) 00:33:51.205 slat (nsec): min=3838, max=58504, avg=11614.02, stdev=4677.23 00:33:51.205 clat (usec): min=2890, max=39432, avg=32943.41, stdev=2602.17 00:33:51.205 lat (usec): min=2895, max=39441, avg=32955.02, stdev=2601.95 00:33:51.205 clat percentiles (usec): 00:33:51.205 | 1.00th=[20055], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:33:51.205 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:33:51.205 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:33:51.205 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[39584], 00:33:51.205 | 99.99th=[39584] 00:33:51.205 bw ( KiB/s): min= 1792, max= 2280, per=4.21%, avg=1931.60, stdev=86.84, samples=20 00:33:51.205 iops : min= 448, max= 570, avg=482.90, stdev=21.71, samples=20 00:33:51.205 lat (msec) : 4=0.33%, 10=0.33%, 20=0.27%, 50=99.07% 00:33:51.205 cpu : usr=97.57%, sys=1.38%, ctx=142, majf=0, minf=9 00:33:51.205 IO depths : 1=6.1%, 2=12.2%, 4=24.4%, 8=50.9%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:51.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 issued rwts: total=4845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.205 filename2: (groupid=0, jobs=1): err= 0: pid=392378: Mon Oct 7 09:54:38 2024 00:33:51.205 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10006msec) 00:33:51.205 slat (nsec): min=5807, max=62285, avg=32151.04, stdev=8894.91 00:33:51.205 clat (usec): min=19835, max=62086, avg=33162.67, 
stdev=1852.58 00:33:51.205 lat (usec): min=19849, max=62102, avg=33194.82, stdev=1851.58 00:33:51.205 clat percentiles (usec): 00:33:51.205 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.205 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:33:51.205 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.205 | 99.00th=[33817], 99.50th=[34341], 99.90th=[62129], 99.95th=[62129], 00:33:51.205 | 99.99th=[62129] 00:33:51.205 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1906.68, stdev=58.04, samples=19 00:33:51.205 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:33:51.205 lat (msec) : 20=0.13%, 50=99.54%, 100=0.33% 00:33:51.205 cpu : usr=97.87%, sys=1.49%, ctx=79, majf=0, minf=9 00:33:51.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.205 filename2: (groupid=0, jobs=1): err= 0: pid=392379: Mon Oct 7 09:54:38 2024 00:33:51.205 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10011msec) 00:33:51.205 slat (nsec): min=8167, max=54217, avg=18121.33, stdev=8650.87 00:33:51.205 clat (usec): min=17484, max=35352, avg=33226.38, stdev=708.83 00:33:51.205 lat (usec): min=17495, max=35372, avg=33244.51, stdev=707.51 00:33:51.205 clat percentiles (usec): 00:33:51.205 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:33:51.205 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.205 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:33:51.205 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:33:51.205 | 99.99th=[35390] 00:33:51.205 bw ( KiB/s): 
min= 1792, max= 1920, per=4.17%, avg=1913.60, stdev=28.62, samples=20 00:33:51.205 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:33:51.205 lat (msec) : 20=0.04%, 50=99.96% 00:33:51.205 cpu : usr=98.10%, sys=1.42%, ctx=54, majf=0, minf=9 00:33:51.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.205 filename2: (groupid=0, jobs=1): err= 0: pid=392380: Mon Oct 7 09:54:38 2024 00:33:51.205 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:33:51.205 slat (nsec): min=8229, max=55016, avg=32825.78, stdev=5775.01 00:33:51.205 clat (usec): min=31777, max=44181, avg=33163.80, stdev=705.68 00:33:51.205 lat (usec): min=31797, max=44197, avg=33196.62, stdev=704.48 00:33:51.205 clat percentiles (usec): 00:33:51.205 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.205 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.205 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.205 | 99.00th=[34341], 99.50th=[35390], 99.90th=[44303], 99.95th=[44303], 00:33:51.205 | 99.99th=[44303] 00:33:51.205 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.205 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.205 lat (msec) : 50=100.00% 00:33:51.205 cpu : usr=97.76%, sys=1.50%, ctx=74, majf=0, minf=9 00:33:51.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.205 issued 
rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.205 filename2: (groupid=0, jobs=1): err= 0: pid=392381: Mon Oct 7 09:54:38 2024 00:33:51.205 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10004msec) 00:33:51.205 slat (nsec): min=10297, max=62214, avg=33071.12, stdev=6589.19 00:33:51.205 clat (usec): min=32541, max=45813, avg=33169.47, stdev=773.53 00:33:51.206 lat (usec): min=32572, max=45857, avg=33202.54, stdev=772.77 00:33:51.206 clat percentiles (usec): 00:33:51.206 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.206 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.206 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.206 | 99.00th=[33817], 99.50th=[34341], 99.90th=[45876], 99.95th=[45876], 00:33:51.206 | 99.99th=[45876] 00:33:51.206 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.206 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.206 lat (msec) : 50=100.00% 00:33:51.206 cpu : usr=98.48%, sys=1.12%, ctx=14, majf=0, minf=9 00:33:51.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.206 filename2: (groupid=0, jobs=1): err= 0: pid=392382: Mon Oct 7 09:54:38 2024 00:33:51.206 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10004msec) 00:33:51.206 slat (nsec): min=8361, max=62709, avg=23429.50, stdev=10604.98 00:33:51.206 clat (usec): min=28284, max=51812, avg=33278.19, stdev=809.39 00:33:51.206 lat (usec): min=28296, max=51843, avg=33301.62, stdev=808.21 00:33:51.206 clat 
percentiles (usec): 00:33:51.206 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:33:51.206 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.206 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:33:51.206 | 99.00th=[33817], 99.50th=[34341], 99.90th=[45351], 99.95th=[45351], 00:33:51.206 | 99.99th=[51643] 00:33:51.206 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.26, stdev=29.37, samples=19 00:33:51.206 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:33:51.206 lat (msec) : 50=99.96%, 100=0.04% 00:33:51.206 cpu : usr=98.15%, sys=1.46%, ctx=15, majf=0, minf=9 00:33:51.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:51.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.206 filename2: (groupid=0, jobs=1): err= 0: pid=392384: Mon Oct 7 09:54:38 2024 00:33:51.206 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:33:51.206 slat (usec): min=9, max=114, avg=48.11, stdev=22.87 00:33:51.206 clat (usec): min=19952, max=59761, avg=33031.69, stdev=1867.79 00:33:51.206 lat (usec): min=19977, max=59823, avg=33079.80, stdev=1867.15 00:33:51.206 clat percentiles (usec): 00:33:51.206 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:33:51.206 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:33:51.206 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33424], 00:33:51.206 | 99.00th=[33817], 99.50th=[40633], 99.90th=[59507], 99.95th=[59507], 00:33:51.206 | 99.99th=[59507] 00:33:51.206 bw ( KiB/s): min= 1776, max= 1920, per=4.15%, avg=1906.53, stdev=40.71, samples=19 00:33:51.206 iops : min= 444, max= 480, avg=476.63, 
stdev=10.18, samples=19 00:33:51.206 lat (msec) : 20=0.04%, 50=99.62%, 100=0.33% 00:33:51.206 cpu : usr=96.03%, sys=2.23%, ctx=434, majf=0, minf=9 00:33:51.206 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:51.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.206 filename2: (groupid=0, jobs=1): err= 0: pid=392385: Mon Oct 7 09:54:38 2024 00:33:51.206 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10005msec) 00:33:51.206 slat (nsec): min=6361, max=94140, avg=33024.42, stdev=12217.68 00:33:51.206 clat (usec): min=19775, max=61166, avg=33124.09, stdev=1807.57 00:33:51.206 lat (usec): min=19800, max=61187, avg=33157.11, stdev=1807.23 00:33:51.206 clat percentiles (usec): 00:33:51.206 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:33:51.206 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:33:51.206 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:33:51.206 | 99.00th=[33817], 99.50th=[34341], 99.90th=[61080], 99.95th=[61080], 00:33:51.206 | 99.99th=[61080] 00:33:51.206 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1906.53, stdev=58.73, samples=19 00:33:51.206 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:33:51.206 lat (msec) : 20=0.19%, 50=99.48%, 100=0.33% 00:33:51.206 cpu : usr=95.72%, sys=2.48%, ctx=262, majf=0, minf=9 00:33:51.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.206 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:33:51.206 filename2: (groupid=0, jobs=1): err= 0: pid=392386: Mon Oct 7 09:54:38 2024 00:33:51.206 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.8MiB/10020msec) 00:33:51.206 slat (nsec): min=8811, max=56552, avg=30180.09, stdev=6846.45 00:33:51.206 clat (usec): min=25717, max=35322, avg=33112.75, stdev=511.66 00:33:51.206 lat (usec): min=25737, max=35337, avg=33142.93, stdev=512.24 00:33:51.206 clat percentiles (usec): 00:33:51.206 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:33:51.206 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:33:51.206 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:33:51.206 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:33:51.206 | 99.99th=[35390] 00:33:51.206 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1913.60, stdev=28.62, samples=20 00:33:51.206 iops : min= 448, max= 480, avg=478.40, stdev= 7.16, samples=20 00:33:51.206 lat (msec) : 50=100.00% 00:33:51.206 cpu : usr=98.00%, sys=1.33%, ctx=127, majf=0, minf=10 00:33:51.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:51.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.206 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:51.206 00:33:51.206 Run status group 0 (all jobs): 00:33:51.206 READ: bw=44.8MiB/s (47.0MB/s), 1912KiB/s-1936KiB/s (1958kB/s-1983kB/s), io=449MiB (471MB), run=10003-10020msec 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.206 09:54:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:51.206 
09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:51.206 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 bdev_null0 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:51.207 [2024-10-07 09:54:38.987265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 bdev_null1 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.207 09:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.207 09:54:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:51.207 { 00:33:51.207 "params": { 00:33:51.207 "name": "Nvme$subsystem", 00:33:51.207 "trtype": "$TEST_TRANSPORT", 00:33:51.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.207 "adrfam": "ipv4", 00:33:51.207 "trsvcid": "$NVMF_PORT", 00:33:51.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.207 "hdgst": ${hdgst:-false}, 00:33:51.207 "ddgst": ${ddgst:-false} 00:33:51.207 }, 00:33:51.207 "method": "bdev_nvme_attach_controller" 00:33:51.207 } 00:33:51.207 EOF 00:33:51.207 )") 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:51.207 09:54:39 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:51.207 { 00:33:51.207 "params": { 00:33:51.207 "name": "Nvme$subsystem", 00:33:51.207 "trtype": "$TEST_TRANSPORT", 00:33:51.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.207 "adrfam": "ipv4", 00:33:51.207 "trsvcid": "$NVMF_PORT", 00:33:51.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.207 "hdgst": ${hdgst:-false}, 00:33:51.207 "ddgst": ${ddgst:-false} 00:33:51.207 }, 00:33:51.207 "method": "bdev_nvme_attach_controller" 00:33:51.207 } 00:33:51.207 EOF 00:33:51.207 )") 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:51.207 "params": { 00:33:51.207 "name": "Nvme0", 00:33:51.207 "trtype": "tcp", 00:33:51.207 "traddr": "10.0.0.2", 00:33:51.207 "adrfam": "ipv4", 00:33:51.207 "trsvcid": "4420", 00:33:51.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.207 "hdgst": false, 00:33:51.207 "ddgst": false 00:33:51.207 }, 00:33:51.207 "method": "bdev_nvme_attach_controller" 00:33:51.207 },{ 00:33:51.207 "params": { 00:33:51.207 "name": "Nvme1", 00:33:51.207 "trtype": "tcp", 00:33:51.207 "traddr": "10.0.0.2", 00:33:51.207 "adrfam": "ipv4", 00:33:51.207 "trsvcid": "4420", 00:33:51.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.207 "hdgst": false, 00:33:51.207 "ddgst": false 00:33:51.207 }, 00:33:51.207 "method": "bdev_nvme_attach_controller" 00:33:51.207 }' 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:51.207 09:54:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:51.207 09:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.207 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:51.207 ... 00:33:51.207 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:51.207 ... 00:33:51.207 fio-3.35 00:33:51.207 Starting 4 threads 00:33:56.471 00:33:56.471 filename0: (groupid=0, jobs=1): err= 0: pid=393673: Mon Oct 7 09:54:45 2024 00:33:56.471 read: IOPS=1765, BW=13.8MiB/s (14.5MB/s)(69.0MiB/5002msec) 00:33:56.471 slat (nsec): min=7683, max=73135, avg=18262.66, stdev=9630.81 00:33:56.471 clat (usec): min=777, max=8185, avg=4462.18, stdev=780.63 00:33:56.471 lat (usec): min=797, max=8193, avg=4480.44, stdev=779.86 00:33:56.471 clat percentiles (usec): 00:33:56.471 | 1.00th=[ 2802], 5.00th=[ 3556], 10.00th=[ 3818], 20.00th=[ 4080], 00:33:56.471 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:33:56.471 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5473], 95.00th=[ 6194], 00:33:56.471 | 99.00th=[ 7308], 99.50th=[ 7635], 99.90th=[ 7898], 99.95th=[ 7963], 00:33:56.471 | 99.99th=[ 8160] 00:33:56.471 bw ( KiB/s): min=13440, max=14688, per=24.16%, avg=14121.10, stdev=341.88, samples=10 00:33:56.471 iops : min= 1680, max= 1836, avg=1765.10, stdev=42.75, samples=10 00:33:56.471 lat (usec) : 1000=0.03% 00:33:56.471 lat (msec) : 2=0.33%, 4=16.53%, 10=83.11% 00:33:56.471 cpu : usr=92.00%, sys=5.54%, ctx=319, majf=0, minf=33 00:33:56.471 IO depths : 1=0.3%, 2=16.0%, 4=56.8%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.471 complete : 
0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.471 issued rwts: total=8832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.471 filename0: (groupid=0, jobs=1): err= 0: pid=393674: Mon Oct 7 09:54:45 2024 00:33:56.471 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5003msec) 00:33:56.471 slat (nsec): min=7478, max=67022, avg=16964.09, stdev=8900.88 00:33:56.471 clat (usec): min=766, max=8103, avg=4310.02, stdev=696.80 00:33:56.471 lat (usec): min=780, max=8134, avg=4326.98, stdev=696.67 00:33:56.471 clat percentiles (usec): 00:33:56.471 | 1.00th=[ 2769], 5.00th=[ 3490], 10.00th=[ 3687], 20.00th=[ 3916], 00:33:56.471 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:33:56.471 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5669], 00:33:56.471 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 7832], 99.95th=[ 7963], 00:33:56.471 | 99.99th=[ 8094] 00:33:56.471 bw ( KiB/s): min=13968, max=15232, per=25.02%, avg=14624.00, stdev=377.27, samples=10 00:33:56.471 iops : min= 1746, max= 1904, avg=1828.00, stdev=47.16, samples=10 00:33:56.471 lat (usec) : 1000=0.02% 00:33:56.471 lat (msec) : 2=0.31%, 4=23.94%, 10=75.73% 00:33:56.471 cpu : usr=92.82%, sys=5.14%, ctx=302, majf=0, minf=47 00:33:56.471 IO depths : 1=0.6%, 2=18.7%, 4=55.2%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.471 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.471 issued rwts: total=9148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.471 filename1: (groupid=0, jobs=1): err= 0: pid=393675: Mon Oct 7 09:54:45 2024 00:33:56.471 read: IOPS=1781, BW=13.9MiB/s (14.6MB/s)(69.6MiB/5001msec) 00:33:56.471 slat (nsec): min=7238, max=65804, avg=16257.11, stdev=7779.81 00:33:56.471 clat (usec): min=946, 
max=9579, avg=4431.39, stdev=787.99 00:33:56.471 lat (usec): min=959, max=9607, avg=4447.65, stdev=787.43 00:33:56.471 clat percentiles (usec): 00:33:56.471 | 1.00th=[ 2999], 5.00th=[ 3523], 10.00th=[ 3720], 20.00th=[ 3982], 00:33:56.471 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:33:56.471 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5342], 95.00th=[ 6194], 00:33:56.471 | 99.00th=[ 7308], 99.50th=[ 7570], 99.90th=[ 7898], 99.95th=[ 8029], 00:33:56.471 | 99.99th=[ 9634] 00:33:56.471 bw ( KiB/s): min=13904, max=14528, per=24.33%, avg=14222.22, stdev=217.54, samples=9 00:33:56.471 iops : min= 1738, max= 1816, avg=1777.78, stdev=27.19, samples=9 00:33:56.471 lat (usec) : 1000=0.03% 00:33:56.471 lat (msec) : 2=0.26%, 4=20.03%, 10=79.68% 00:33:56.471 cpu : usr=95.90%, sys=3.62%, ctx=7, majf=0, minf=45 00:33:56.471 IO depths : 1=0.4%, 2=15.9%, 4=57.2%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.471 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.471 issued rwts: total=8911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.471 filename1: (groupid=0, jobs=1): err= 0: pid=393676: Mon Oct 7 09:54:45 2024 00:33:56.471 read: IOPS=1933, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5004msec) 00:33:56.471 slat (nsec): min=7133, max=72631, avg=11940.37, stdev=6350.60 00:33:56.471 clat (usec): min=1137, max=6826, avg=4092.57, stdev=498.74 00:33:56.471 lat (usec): min=1156, max=6833, avg=4104.51, stdev=498.53 00:33:56.471 clat percentiles (usec): 00:33:56.471 | 1.00th=[ 2311], 5.00th=[ 3261], 10.00th=[ 3556], 20.00th=[ 3752], 00:33:56.471 | 30.00th=[ 3916], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:56.471 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4752], 00:33:56.471 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 6063], 99.95th=[ 6325], 
00:33:56.471 | 99.99th=[ 6849] 00:33:56.471 bw ( KiB/s): min=15200, max=15744, per=26.48%, avg=15478.40, stdev=136.85, samples=10 00:33:56.471 iops : min= 1900, max= 1968, avg=1934.80, stdev=17.11, samples=10 00:33:56.471 lat (msec) : 2=0.59%, 4=34.75%, 10=64.66% 00:33:56.471 cpu : usr=95.24%, sys=4.26%, ctx=9, majf=0, minf=79 00:33:56.471 IO depths : 1=1.4%, 2=15.0%, 4=58.1%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.471 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.471 issued rwts: total=9675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.471 00:33:56.471 Run status group 0 (all jobs): 00:33:56.471 READ: bw=57.1MiB/s (59.9MB/s), 13.8MiB/s-15.1MiB/s (14.5MB/s-15.8MB/s), io=286MiB (300MB), run=5001-5004msec 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:56.471 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.472 00:33:56.472 real 0m24.176s 00:33:56.472 user 4m33.028s 00:33:56.472 sys 0m5.998s 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:56.472 09:54:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 ************************************ 00:33:56.472 END TEST fio_dif_rand_params 00:33:56.472 ************************************ 00:33:56.472 09:54:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:56.472 09:54:45 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:33:56.472 09:54:45 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:56.472 09:54:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 ************************************ 00:33:56.472 START TEST fio_dif_digest 00:33:56.472 ************************************ 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:33:56.472 bdev_null0 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 [2024-10-07 09:54:45.374381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:33:56.472 
09:54:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:56.472 { 00:33:56.472 "params": { 00:33:56.472 "name": "Nvme$subsystem", 00:33:56.472 "trtype": "$TEST_TRANSPORT", 00:33:56.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.472 "adrfam": "ipv4", 00:33:56.472 "trsvcid": "$NVMF_PORT", 00:33:56.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.472 "hdgst": ${hdgst:-false}, 00:33:56.472 "ddgst": ${ddgst:-false} 00:33:56.472 }, 00:33:56.472 "method": "bdev_nvme_attach_controller" 00:33:56.472 } 00:33:56.472 EOF 00:33:56.472 )") 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1341 -- # shift 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:56.472 "params": { 00:33:56.472 "name": "Nvme0", 00:33:56.472 "trtype": "tcp", 00:33:56.472 "traddr": "10.0.0.2", 00:33:56.472 "adrfam": "ipv4", 00:33:56.472 "trsvcid": "4420", 00:33:56.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.472 "hdgst": true, 00:33:56.472 "ddgst": true 00:33:56.472 }, 00:33:56.472 "method": "bdev_nvme_attach_controller" 00:33:56.472 }' 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:56.472 09:54:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.731 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:56.731 ... 00:33:56.731 fio-3.35 00:33:56.731 Starting 3 threads 00:34:08.933 00:34:08.933 filename0: (groupid=0, jobs=1): err= 0: pid=394511: Mon Oct 7 09:54:56 2024 00:34:08.933 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(272MiB/10045msec) 00:34:08.933 slat (nsec): min=4801, max=98374, avg=14283.95, stdev=3524.54 00:34:08.933 clat (usec): min=10346, max=54374, avg=13810.90, stdev=1579.55 00:34:08.933 lat (usec): min=10359, max=54390, avg=13825.18, stdev=1579.71 00:34:08.933 clat percentiles (usec): 00:34:08.933 | 1.00th=[11338], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:34:08.933 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13698], 60.00th=[13960], 00:34:08.933 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:34:08.933 | 99.00th=[16450], 99.50th=[17171], 99.90th=[21890], 99.95th=[51643], 00:34:08.933 | 99.99th=[54264] 00:34:08.933 bw ( KiB/s): min=27136, max=29696, per=35.23%, avg=27827.20, stdev=538.91, samples=20 00:34:08.933 iops : min= 212, max= 232, avg=217.40, stdev= 4.21, samples=20 00:34:08.933 lat 
(msec) : 20=99.82%, 50=0.09%, 100=0.09% 00:34:08.933 cpu : usr=92.07%, sys=7.35%, ctx=24, majf=0, minf=145 00:34:08.933 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.933 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.933 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.933 filename0: (groupid=0, jobs=1): err= 0: pid=394512: Mon Oct 7 09:54:56 2024 00:34:08.933 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(246MiB/10046msec) 00:34:08.933 slat (usec): min=4, max=106, avg=14.48, stdev= 3.72 00:34:08.933 clat (usec): min=11297, max=52835, avg=15266.49, stdev=1552.38 00:34:08.933 lat (usec): min=11310, max=52848, avg=15280.97, stdev=1552.35 00:34:08.933 clat percentiles (usec): 00:34:08.933 | 1.00th=[12649], 5.00th=[13566], 10.00th=[13960], 20.00th=[14353], 00:34:08.933 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:34:08.933 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:34:08.933 | 99.00th=[17695], 99.50th=[18220], 99.90th=[49021], 99.95th=[52691], 00:34:08.933 | 99.99th=[52691] 00:34:08.933 bw ( KiB/s): min=24368, max=26112, per=31.88%, avg=25180.00, stdev=436.85, samples=20 00:34:08.933 iops : min= 190, max= 204, avg=196.70, stdev= 3.45, samples=20 00:34:08.933 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:34:08.933 cpu : usr=92.55%, sys=6.92%, ctx=37, majf=0, minf=206 00:34:08.933 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.933 issued rwts: total=1969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.933 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.933 
filename0: (groupid=0, jobs=1): err= 0: pid=394513: Mon Oct 7 09:54:56 2024 00:34:08.933 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10005msec) 00:34:08.933 slat (nsec): min=4205, max=42126, avg=13898.88, stdev=3146.18 00:34:08.933 clat (usec): min=7968, max=20397, avg=14594.94, stdev=949.15 00:34:08.933 lat (usec): min=7982, max=20408, avg=14608.84, stdev=949.28 00:34:08.933 clat percentiles (usec): 00:34:08.933 | 1.00th=[12387], 5.00th=[13173], 10.00th=[13435], 20.00th=[13829], 00:34:08.933 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14746], 00:34:08.933 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:34:08.933 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18482], 99.95th=[19268], 00:34:08.933 | 99.99th=[20317] 00:34:08.933 bw ( KiB/s): min=25344, max=26880, per=33.25%, avg=26265.60, stdev=375.14, samples=20 00:34:08.933 iops : min= 198, max= 210, avg=205.20, stdev= 2.93, samples=20 00:34:08.933 lat (msec) : 10=0.05%, 20=99.90%, 50=0.05% 00:34:08.933 cpu : usr=92.62%, sys=6.85%, ctx=18, majf=0, minf=170 00:34:08.933 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.933 issued rwts: total=2054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.933 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.933 00:34:08.933 Run status group 0 (all jobs): 00:34:08.933 READ: bw=77.1MiB/s (80.9MB/s), 24.5MiB/s-27.1MiB/s (25.7MB/s-28.4MB/s), io=775MiB (813MB), run=10005-10046msec 00:34:08.933 09:54:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:08.933 09:54:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:08.933 09:54:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.933 09:54:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 
0 00:34:08.933 09:54:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.934 00:34:08.934 real 0m11.091s 00:34:08.934 user 0m28.866s 00:34:08.934 sys 0m2.397s 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:08.934 09:54:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:08.934 ************************************ 00:34:08.934 END TEST fio_dif_digest 00:34:08.934 ************************************ 00:34:08.934 09:54:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:08.934 09:54:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:08.934 rmmod nvme_tcp 00:34:08.934 rmmod nvme_fabrics 00:34:08.934 rmmod nvme_keyring 
00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 387967 ']' 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 387967 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 387967 ']' 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 387967 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 387967 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 387967' 00:34:08.934 killing process with pid 387967 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@969 -- # kill 387967 00:34:08.934 09:54:56 nvmf_dif -- common/autotest_common.sh@974 -- # wait 387967 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:34:08.934 09:54:56 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:08.934 Waiting for block devices as requested 00:34:09.194 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:34:09.194 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:09.457 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:09.457 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:09.457 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:09.457 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:09.776 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:09.776 
0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:09.776 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:09.776 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:10.089 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:10.089 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:10.089 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:10.089 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:10.373 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:10.373 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:10.373 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.373 09:54:59 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.373 09:54:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:10.373 09:54:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.910 09:55:01 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.910 00:34:12.910 real 1m6.981s 00:34:12.910 user 6m29.246s 00:34:12.910 sys 0m17.830s 00:34:12.910 09:55:01 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:12.910 09:55:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.910 ************************************ 00:34:12.910 END TEST nvmf_dif 00:34:12.910 ************************************ 00:34:12.910 09:55:01 -- 
spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:12.910 09:55:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:12.910 09:55:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:12.910 09:55:01 -- common/autotest_common.sh@10 -- # set +x 00:34:12.910 ************************************ 00:34:12.910 START TEST nvmf_abort_qd_sizes 00:34:12.910 ************************************ 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:12.910 * Looking for test storage... 00:34:12.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.910 09:55:01 
nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.910 --rc genhtml_branch_coverage=1 
00:34:12.910 --rc genhtml_function_coverage=1 00:34:12.910 --rc genhtml_legend=1 00:34:12.910 --rc geninfo_all_blocks=1 00:34:12.910 --rc geninfo_unexecuted_blocks=1 00:34:12.910 00:34:12.910 ' 00:34:12.910 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:12.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.911 --rc genhtml_branch_coverage=1 00:34:12.911 --rc genhtml_function_coverage=1 00:34:12.911 --rc genhtml_legend=1 00:34:12.911 --rc geninfo_all_blocks=1 00:34:12.911 --rc geninfo_unexecuted_blocks=1 00:34:12.911 00:34:12.911 ' 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:12.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.911 --rc genhtml_branch_coverage=1 00:34:12.911 --rc genhtml_function_coverage=1 00:34:12.911 --rc genhtml_legend=1 00:34:12.911 --rc geninfo_all_blocks=1 00:34:12.911 --rc geninfo_unexecuted_blocks=1 00:34:12.911 00:34:12.911 ' 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:12.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.911 --rc genhtml_branch_coverage=1 00:34:12.911 --rc genhtml_function_coverage=1 00:34:12.911 --rc genhtml_legend=1 00:34:12.911 --rc geninfo_all_blocks=1 00:34:12.911 --rc geninfo_unexecuted_blocks=1 00:34:12.911 00:34:12.911 ' 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.911 
09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:34:12.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable
00:34:12.911 09:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=()
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=()
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=()
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=()
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=()
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)'
00:34:14.814 Found 0000:09:00.0 (0x8086 - 0x1592)
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)'
00:34:14.814 Found 0000:09:00.1 (0x8086 - 0x1592)
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:34:14.814 Found net devices under 0000:09:00.0: cvl_0_0
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:34:14.814 Found net devices under 0000:09:00.1: cvl_0_1
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:14.814 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:14.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:14.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms
00:34:14.815
00:34:14.815 --- 10.0.0.2 ping statistics ---
00:34:14.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:14.815 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:14.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:14.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:34:14.815
00:34:14.815 --- 10.0.0.1 ping statistics ---
00:34:14.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:14.815 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']'
00:34:14.815 09:55:03 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:34:16.193 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:34:16.193 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:34:16.193 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:34:16.193 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:34:16.193 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:34:16.193 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:34:16.193 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:34:16.193 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:34:16.193 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:34:16.193 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:34:16.193 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:34:16.193 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:34:16.193 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:34:16.193 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:34:16.193 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:34:16.193 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:34:17.130 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=399201
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 399201
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 399201 ']'
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:17.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:17.130 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:34:17.389 [2024-10-07 09:55:06.120235] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization...
00:34:17.389 [2024-10-07 09:55:06.120305] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:17.389 [2024-10-07 09:55:06.181021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:17.389 [2024-10-07 09:55:06.287455] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:17.389 [2024-10-07 09:55:06.287516] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:17.389 [2024-10-07 09:55:06.287541] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:17.389 [2024-10-07 09:55:06.287552] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:17.389 [2024-10-07 09:55:06.287560] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:17.389 [2024-10-07 09:55:06.289043] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:34:17.389 [2024-10-07 09:55:06.289101] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:34:17.389 [2024-10-07 09:55:06.289167] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:34:17.389 [2024-10-07 09:55:06.289170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:84:00.0 ]]
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]]
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 ))
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:84:00.0
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 ))
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:17.648 09:55:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:34:17.648 ************************************
00:34:17.648 START TEST spdk_target_abort
00:34:17.648 ************************************
00:34:17.648 09:55:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target
00:34:17.648 09:55:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:34:17.648 09:55:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target
00:34:17.648 09:55:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:17.648 09:55:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:20.932 spdk_targetn1
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:20.932 [2024-10-07 09:55:09.314554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:20.932 [2024-10-07 09:55:09.346878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:34:20.932 09:55:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:34:24.211 Initializing NVMe Controllers
00:34:24.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:34:24.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:34:24.211 Initialization complete. Launching workers.
00:34:24.211 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12420, failed: 0
00:34:24.211 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1202, failed to submit 11218
00:34:24.211 success 736, unsuccessful 466, failed 0
00:34:24.211 09:55:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:34:24.211 09:55:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:34:27.492 Initializing NVMe Controllers
00:34:27.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:34:27.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:34:27.492 Initialization complete. Launching workers.
00:34:27.492 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8459, failed: 0
00:34:27.492 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7231
00:34:27.492 success 316, unsuccessful 912, failed 0
00:34:27.492 09:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:34:27.492 09:55:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:34:30.771 Initializing NVMe Controllers
00:34:30.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:34:30.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:34:30.771 Initialization complete. Launching workers.
00:34:30.771 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29927, failed: 0
00:34:30.771 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2614, failed to submit 27313
00:34:30.771 success 455, unsuccessful 2159, failed 0
00:34:30.771 09:55:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:34:30.771 09:55:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.771 09:55:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:30.771 09:55:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.771 09:55:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:34:30.771 09:55:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.771 09:55:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 399201
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 399201 ']'
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 399201
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399201
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399201'
00:34:31.336 killing process with pid 399201
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 399201
00:34:31.336 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 399201
00:34:31.594
00:34:31.594 real 0m14.110s
00:34:31.594 user 0m52.842s
00:34:31.594 sys 0m2.864s
00:34:31.594 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:31.594 09:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:31.594 ************************************
00:34:31.594 END TEST spdk_target_abort
00:34:31.594 ************************************
00:34:31.853 09:55:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:34:31.853 09:55:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:34:31.853 09:55:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:31.853 09:55:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:34:31.853 ************************************
00:34:31.853 START TEST kernel_target_abort
00:34:31.853 ************************************
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]]
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:34:31.853 09:55:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:34:33.228 Waiting for block devices as requested
00:34:33.228 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:34:33.228 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:34:33.228 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:34:33.228 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:34:33.486 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:34:33.486 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:34:33.486 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:34:33.486 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:34:33.486 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:34:33.745 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:34:33.745 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:34:33.745 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:34:34.005 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:34:34.005 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:34:34.005 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:34:34.005 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:34:34.264 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:34:34.264 No valid GPT data, bailing
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:34:34.264 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1
00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort
-- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.1 -t tcp -s 4420 00:34:34.523 00:34:34.523 Discovery Log Number of Records 2, Generation counter 2 00:34:34.523 =====Discovery Log Entry 0====== 00:34:34.523 trtype: tcp 00:34:34.523 adrfam: ipv4 00:34:34.523 subtype: current discovery subsystem 00:34:34.523 treq: not specified, sq flow control disable supported 00:34:34.523 portid: 1 00:34:34.523 trsvcid: 4420 00:34:34.523 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:34.523 traddr: 10.0.0.1 00:34:34.523 eflags: none 00:34:34.523 sectype: none 00:34:34.523 =====Discovery Log Entry 1====== 00:34:34.523 trtype: tcp 00:34:34.523 adrfam: ipv4 00:34:34.523 subtype: nvme subsystem 00:34:34.523 treq: not specified, sq flow control disable supported 00:34:34.523 portid: 1 00:34:34.523 trsvcid: 4420 00:34:34.523 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:34.523 traddr: 10.0.0.1 00:34:34.523 eflags: none 00:34:34.523 sectype: none 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:34.523 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:34.524 09:55:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:37.798 Initializing NVMe Controllers 00:34:37.798 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:37.798 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:37.798 Initialization complete. Launching workers. 
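The `configure_kernel_target` steps traced earlier in this run (the `mkdir`/`echo`/`ln -s` sequence under `/sys/kernel/config/nvmet`, nvmf/common.sh@684–703) can be sketched as a dry run. This is a sketch, not SPDK's script: the xtrace only shows the values being echoed, so the configfs attribute file names below (`attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are assumed from the upstream nvmet configfs ABI. The function only prints the commands; nothing here touches the real configfs, which would require root and the `nvmet`/`nvmet_tcp` modules.

```shell
#!/bin/sh
# Dry-run plan of a kernel NVMe-oF/TCP target setup: create a subsystem,
# back namespace 1 with a block device, create TCP port 1, then link the
# subsystem into the port. Attribute names are assumptions from the nvmet
# configfs ABI, not taken from the xtrace above.
kernel_target_plan() {
    nqn=$1 ip=$2 dev=$3
    base=/sys/kernel/config/nvmet
    sub=$base/subsystems/$nqn
    echo "mkdir $sub"
    echo "mkdir $sub/namespaces/1"
    echo "mkdir $base/ports/1"
    echo "echo 1 > $sub/attr_allow_any_host"
    echo "echo $dev > $sub/namespaces/1/device_path"
    echo "echo 1 > $sub/namespaces/1/enable"
    echo "echo $ip > $base/ports/1/addr_traddr"
    echo "echo tcp > $base/ports/1/addr_trtype"
    echo "echo 4420 > $base/ports/1/addr_trsvcid"
    echo "echo ipv4 > $base/ports/1/addr_adrfam"
    echo "ln -s $sub $base/ports/1/subsystems/$nqn"
}

# Same NQN, address, and backing device as the run logged above.
kernel_target_plan nqn.2016-06.io.spdk:testnqn 10.0.0.1 /dev/nvme0n1
```

Teardown (`clean_kernel_target`, logged later in this run) is the mirror image: remove the port→subsystem symlink, `rmdir` the namespace, port, and subsystem directories, then `modprobe -r nvmet_tcp nvmet`.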
00:34:37.798 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56053, failed: 0 00:34:37.798 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56053, failed to submit 0 00:34:37.798 success 0, unsuccessful 56053, failed 0 00:34:37.798 09:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:37.798 09:55:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:41.076 Initializing NVMe Controllers 00:34:41.076 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:41.076 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:41.076 Initialization complete. Launching workers. 00:34:41.076 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100600, failed: 0 00:34:41.076 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25350, failed to submit 75250 00:34:41.076 success 0, unsuccessful 25350, failed 0 00:34:41.076 09:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:41.076 09:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:44.358 Initializing NVMe Controllers 00:34:44.358 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:44.358 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:44.358 Initialization complete. Launching workers. 
00:34:44.358 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95549, failed: 0 00:34:44.358 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23882, failed to submit 71667 00:34:44.358 success 0, unsuccessful 23882, failed 0 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:44.358 09:55:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:45.297 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:45.297 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:45.297 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:45.297 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:45.297 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:45.297 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:45.297 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:45.297 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:45.297 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:45.297 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:45.297 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:45.297 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:45.297 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:45.297 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:45.297 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:45.297 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:45.864 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:34:46.123 00:34:46.123 real 0m14.418s 00:34:46.123 user 0m6.747s 00:34:46.123 sys 0m3.283s 00:34:46.123 09:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:46.123 09:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:46.123 ************************************ 00:34:46.123 END TEST kernel_target_abort 00:34:46.123 ************************************ 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:46.123 rmmod nvme_tcp 00:34:46.123 rmmod nvme_fabrics 00:34:46.123 rmmod nvme_keyring 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 399201 ']' 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 399201 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 399201 ']' 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 399201 00:34:46.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (399201) - No such process 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 399201 is not found' 00:34:46.123 Process with pid 399201 is not found 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:34:46.123 09:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:47.503 Waiting for block devices as requested 00:34:47.503 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:34:47.503 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:47.503 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:47.762 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:47.762 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:47.762 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:47.762 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:48.021 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:48.021 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:48.021 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:48.021 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:48.280 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:48.280 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:48.280 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:48.280 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:34:48.541 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:48.541 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:48.802 09:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:50.708 09:55:39 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:50.708 00:34:50.708 real 0m38.157s 00:34:50.708 user 1m1.812s 00:34:50.708 sys 0m9.637s 00:34:50.708 09:55:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:50.708 09:55:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:50.708 ************************************ 00:34:50.708 END TEST nvmf_abort_qd_sizes 00:34:50.708 ************************************ 00:34:50.708 09:55:39 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:50.708 09:55:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:50.708 09:55:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:34:50.708 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:34:50.708 ************************************ 00:34:50.708 START TEST keyring_file 00:34:50.708 ************************************ 00:34:50.708 09:55:39 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:50.968 * Looking for test storage... 00:34:50.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:50.968 09:55:39 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:50.968 09:55:39 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:34:50.968 09:55:39 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:50.968 09:55:39 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:50.968 09:55:39 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:50.969 09:55:39 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:50.969 09:55:39 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:50.969 09:55:39 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:50.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.969 --rc genhtml_branch_coverage=1 00:34:50.969 --rc genhtml_function_coverage=1 00:34:50.969 --rc genhtml_legend=1 00:34:50.969 --rc geninfo_all_blocks=1 00:34:50.969 --rc geninfo_unexecuted_blocks=1 00:34:50.969 00:34:50.969 ' 00:34:50.969 09:55:39 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:50.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.969 --rc genhtml_branch_coverage=1 00:34:50.969 --rc genhtml_function_coverage=1 00:34:50.969 --rc genhtml_legend=1 00:34:50.969 --rc geninfo_all_blocks=1 00:34:50.969 --rc 
geninfo_unexecuted_blocks=1 00:34:50.969 00:34:50.969 ' 00:34:50.969 09:55:39 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:50.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.969 --rc genhtml_branch_coverage=1 00:34:50.969 --rc genhtml_function_coverage=1 00:34:50.969 --rc genhtml_legend=1 00:34:50.969 --rc geninfo_all_blocks=1 00:34:50.969 --rc geninfo_unexecuted_blocks=1 00:34:50.969 00:34:50.969 ' 00:34:50.969 09:55:39 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:50.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.969 --rc genhtml_branch_coverage=1 00:34:50.969 --rc genhtml_function_coverage=1 00:34:50.969 --rc genhtml_legend=1 00:34:50.969 --rc geninfo_all_blocks=1 00:34:50.969 --rc geninfo_unexecuted_blocks=1 00:34:50.969 00:34:50.969 ' 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:50.969 09:55:39 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:50.969 09:55:39 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:50.969 09:55:39 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.969 09:55:39 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.969 09:55:39 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.969 09:55:39 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:50.969 09:55:39 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:50.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FLYHrD6uXx 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@730 
-- # key=00112233445566778899aabbccddeeff 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@731 -- # python - 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FLYHrD6uXx 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FLYHrD6uXx 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FLYHrD6uXx 00:34:50.969 09:55:39 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.N3YFj57ZnX 00:34:50.969 09:55:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:34:50.969 09:55:39 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:34:50.970 09:55:39 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:34:50.970 09:55:39 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:34:50.970 09:55:39 keyring_file -- nvmf/common.sh@731 -- # python - 00:34:50.970 09:55:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.N3YFj57ZnX 00:34:50.970 09:55:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.N3YFj57ZnX 00:34:50.970 09:55:39 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.N3YFj57ZnX 
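The `format_interchange_psk` calls above (nvmf/common.sh@728–731, ending in the inline `python -`) write the `/tmp/tmp.*` key files used by `keyring_file_add_key`. A sketch of what they plausibly produce, assuming the NVMe/TCP TLS PSK interchange layout: the configured key bytes followed by their little-endian CRC32, base64-encoded, between a `NVMeTLSkey-1:<hash>:` prefix and a trailing `:` (hash indicator `00` for digest 0, i.e. no hash). This mirrors the shell-plus-embedded-python shape of the original helper but is a reconstruction, not SPDK's code.

```shell
#!/bin/sh
# Sketch of the PSK interchange formatting: key hex string in, one
# "NVMeTLSkey-1:<hh>:<base64(key || crc32le(key))>:" line out.
format_interchange_psk() {
    # $1 = key as hex, $2 = digest/hash indicator (0 assumed to mean "none")
    python3 - "$1" "$2" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
body = key + struct.pack("<I", zlib.crc32(key))
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(body).decode()}:")
EOF
}

# key0 from the log above, digest 0
format_interchange_psk 00112233445566778899aabbccddeeff 0
```

For a 16-byte key the base64 payload covers 20 bytes (key plus 4-byte CRC), so the whole line is a fixed 45 characters; the CRC lets a consumer detect a corrupted or truncated key file before handing the PSK to the TLS layer.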
00:34:50.970 09:55:39 keyring_file -- keyring/file.sh@30 -- # tgtpid=404715 00:34:50.970 09:55:39 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:50.970 09:55:39 keyring_file -- keyring/file.sh@32 -- # waitforlisten 404715 00:34:50.970 09:55:39 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 404715 ']' 00:34:50.970 09:55:39 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.970 09:55:39 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:50.970 09:55:39 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.970 09:55:39 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:50.970 09:55:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:51.230 [2024-10-07 09:55:39.966819] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:34:51.230 [2024-10-07 09:55:39.966917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404715 ] 00:34:51.230 [2024-10-07 09:55:40.026097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.230 [2024-10-07 09:55:40.143117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:34:51.489 09:55:40 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:51.489 [2024-10-07 09:55:40.418484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.489 null0 00:34:51.489 [2024-10-07 09:55:40.450547] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:51.489 [2024-10-07 09:55:40.451056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.489 09:55:40 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:51.489 [2024-10-07 09:55:40.474593] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:51.489 request: 00:34:51.489 { 00:34:51.489 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.489 "secure_channel": false, 00:34:51.489 "listen_address": { 00:34:51.489 "trtype": "tcp", 00:34:51.489 "traddr": "127.0.0.1", 00:34:51.489 "trsvcid": "4420" 00:34:51.489 }, 00:34:51.489 "method": "nvmf_subsystem_add_listener", 00:34:51.489 "req_id": 1 00:34:51.489 } 00:34:51.489 Got JSON-RPC error response 00:34:51.489 response: 00:34:51.489 { 00:34:51.489 "code": -32602, 00:34:51.489 "message": "Invalid parameters" 00:34:51.489 } 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:51.489 09:55:40 keyring_file -- keyring/file.sh@47 -- # bperfpid=404725 00:34:51.489 09:55:40 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:51.489 09:55:40 keyring_file -- keyring/file.sh@49 -- # waitforlisten 404725 /var/tmp/bperf.sock 00:34:51.489 09:55:40 
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 404725 ']' 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:51.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:51.489 09:55:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:51.748 [2024-10-07 09:55:40.522575] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:34:51.748 [2024-10-07 09:55:40.522650] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404725 ] 00:34:51.748 [2024-10-07 09:55:40.577831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.748 [2024-10-07 09:55:40.698634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.007 09:55:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:52.007 09:55:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:34:52.007 09:55:40 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FLYHrD6uXx 00:34:52.007 09:55:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FLYHrD6uXx 00:34:52.265 09:55:41 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.N3YFj57ZnX 00:34:52.265 09:55:41 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.N3YFj57ZnX 00:34:52.524 09:55:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:52.524 09:55:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:52.524 09:55:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:52.524 09:55:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:52.524 09:55:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:52.782 09:55:41 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FLYHrD6uXx == \/\t\m\p\/\t\m\p\.\F\L\Y\H\r\D\6\u\X\x ]] 00:34:52.782 09:55:41 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:52.783 09:55:41 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:52.783 09:55:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:52.783 09:55:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:52.783 09:55:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:53.083 09:55:41 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.N3YFj57ZnX == \/\t\m\p\/\t\m\p\.\N\3\Y\F\j\5\7\Z\n\X ]] 00:34:53.083 09:55:41 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:53.083 09:55:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:53.083 09:55:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:53.083 09:55:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:53.083 09:55:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:53.083 09:55:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:34:53.341 09:55:42 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:53.341 09:55:42 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:53.341 09:55:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:53.341 09:55:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:53.341 09:55:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:53.341 09:55:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:53.341 09:55:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:53.599 09:55:42 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:53.599 09:55:42 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:53.599 09:55:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:53.857 [2024-10-07 09:55:42.658262] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:53.857 nvme0n1 00:34:53.857 09:55:42 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:53.857 09:55:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:53.857 09:55:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:53.857 09:55:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:53.857 09:55:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:53.857 09:55:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:34:54.115 09:55:43 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:54.115 09:55:43 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:54.115 09:55:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:54.115 09:55:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:54.115 09:55:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:54.115 09:55:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:54.115 09:55:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:54.373 09:55:43 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:54.373 09:55:43 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:54.633 Running I/O for 1 seconds... 00:34:55.573 10316.00 IOPS, 40.30 MiB/s 00:34:55.573 Latency(us) 00:34:55.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.573 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:55.573 nvme0n1 : 1.01 10368.90 40.50 0.00 0.00 12308.41 5704.06 21068.61 00:34:55.573 =================================================================================================================== 00:34:55.573 Total : 10368.90 40.50 0.00 0.00 12308.41 5704.06 21068.61 00:34:55.573 { 00:34:55.573 "results": [ 00:34:55.573 { 00:34:55.573 "job": "nvme0n1", 00:34:55.573 "core_mask": "0x2", 00:34:55.573 "workload": "randrw", 00:34:55.573 "percentage": 50, 00:34:55.573 "status": "finished", 00:34:55.573 "queue_depth": 128, 00:34:55.573 "io_size": 4096, 00:34:55.573 "runtime": 1.007436, 00:34:55.573 "iops": 10368.89688277965, 00:34:55.573 "mibps": 40.50350344835801, 00:34:55.573 "io_failed": 0, 00:34:55.573 "io_timeout": 0, 00:34:55.573 "avg_latency_us": 
12308.409565100234, 00:34:55.573 "min_latency_us": 5704.059259259259, 00:34:55.573 "max_latency_us": 21068.61037037037 00:34:55.573 } 00:34:55.573 ], 00:34:55.573 "core_count": 1 00:34:55.573 } 00:34:55.573 09:55:44 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:55.573 09:55:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:55.832 09:55:44 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:55.832 09:55:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:55.832 09:55:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:55.832 09:55:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:55.832 09:55:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:55.832 09:55:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:56.091 09:55:44 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:56.091 09:55:44 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:56.091 09:55:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:56.091 09:55:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:56.091 09:55:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:56.091 09:55:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:56.091 09:55:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:56.350 09:55:45 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:56.350 09:55:45 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:34:56.350 09:55:45 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:56.350 09:55:45 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:56.350 09:55:45 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:56.350 09:55:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:56.350 09:55:45 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:56.350 09:55:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:56.350 09:55:45 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:56.350 09:55:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:56.608 [2024-10-07 09:55:45.528624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:56.608 [2024-10-07 09:55:45.529123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50620 (107): Transport endpoint is not connected 00:34:56.608 [2024-10-07 09:55:45.530115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50620 (9): Bad file descriptor 00:34:56.608 [2024-10-07 09:55:45.531114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:56.608 
[2024-10-07 09:55:45.531135] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:56.608 [2024-10-07 09:55:45.531163] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:56.608 [2024-10-07 09:55:45.531177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:56.608 request: 00:34:56.608 { 00:34:56.608 "name": "nvme0", 00:34:56.608 "trtype": "tcp", 00:34:56.608 "traddr": "127.0.0.1", 00:34:56.608 "adrfam": "ipv4", 00:34:56.608 "trsvcid": "4420", 00:34:56.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.608 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:56.608 "prchk_reftag": false, 00:34:56.608 "prchk_guard": false, 00:34:56.608 "hdgst": false, 00:34:56.608 "ddgst": false, 00:34:56.608 "psk": "key1", 00:34:56.608 "allow_unrecognized_csi": false, 00:34:56.608 "method": "bdev_nvme_attach_controller", 00:34:56.608 "req_id": 1 00:34:56.608 } 00:34:56.608 Got JSON-RPC error response 00:34:56.608 response: 00:34:56.608 { 00:34:56.608 "code": -5, 00:34:56.608 "message": "Input/output error" 00:34:56.608 } 00:34:56.608 09:55:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:56.608 09:55:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:56.608 09:55:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:56.608 09:55:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:56.608 09:55:45 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:56.608 09:55:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:56.608 09:55:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:56.608 09:55:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:56.609 09:55:45 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:56.609 09:55:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:56.867 09:55:45 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:56.867 09:55:45 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:56.867 09:55:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:56.867 09:55:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:56.867 09:55:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:56.867 09:55:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:56.867 09:55:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:57.125 09:55:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:57.125 09:55:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:57.125 09:55:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:57.383 09:55:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:57.383 09:55:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:57.641 09:55:46 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:57.641 09:55:46 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:57.641 09:55:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.900 09:55:46 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:34:57.900 09:55:46 keyring_file -- keyring/file.sh@81 -- # chmod 0660 
/tmp/tmp.FLYHrD6uXx 00:34:57.900 09:55:46 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FLYHrD6uXx 00:34:57.900 09:55:46 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:57.900 09:55:46 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FLYHrD6uXx 00:34:57.900 09:55:46 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:58.158 09:55:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:58.158 09:55:46 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:58.158 09:55:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:58.158 09:55:46 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FLYHrD6uXx 00:34:58.158 09:55:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FLYHrD6uXx 00:34:58.158 [2024-10-07 09:55:47.144125] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FLYHrD6uXx': 0100660 00:34:58.158 [2024-10-07 09:55:47.144156] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:58.158 request: 00:34:58.158 { 00:34:58.158 "name": "key0", 00:34:58.158 "path": "/tmp/tmp.FLYHrD6uXx", 00:34:58.158 "method": "keyring_file_add_key", 00:34:58.158 "req_id": 1 00:34:58.158 } 00:34:58.158 Got JSON-RPC error response 00:34:58.158 response: 00:34:58.158 { 00:34:58.158 "code": -1, 00:34:58.158 "message": "Operation not permitted" 00:34:58.158 } 00:34:58.417 09:55:47 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:58.417 09:55:47 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:58.417 09:55:47 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:58.417 
09:55:47 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:58.417 09:55:47 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FLYHrD6uXx 00:34:58.417 09:55:47 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FLYHrD6uXx 00:34:58.417 09:55:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FLYHrD6uXx 00:34:58.675 09:55:47 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FLYHrD6uXx 00:34:58.675 09:55:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:58.675 09:55:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:58.675 09:55:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.675 09:55:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.675 09:55:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.675 09:55:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:58.933 09:55:47 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:58.933 09:55:47 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.933 09:55:47 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:58.933 09:55:47 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.933 09:55:47 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:58.933 09:55:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:58.933 09:55:47 
keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:58.933 09:55:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:58.934 09:55:47 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.934 09:55:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:59.191 [2024-10-07 09:55:47.970383] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FLYHrD6uXx': No such file or directory 00:34:59.191 [2024-10-07 09:55:47.970414] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:59.191 [2024-10-07 09:55:47.970451] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:59.191 [2024-10-07 09:55:47.970463] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:59.191 [2024-10-07 09:55:47.970475] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:59.191 [2024-10-07 09:55:47.970486] bdev_nvme.c:6449:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:59.191 request: 00:34:59.191 { 00:34:59.191 "name": "nvme0", 00:34:59.191 "trtype": "tcp", 00:34:59.191 "traddr": "127.0.0.1", 00:34:59.191 "adrfam": "ipv4", 00:34:59.191 "trsvcid": "4420", 00:34:59.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.191 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.191 "prchk_reftag": false, 00:34:59.191 "prchk_guard": 
false, 00:34:59.191 "hdgst": false, 00:34:59.191 "ddgst": false, 00:34:59.191 "psk": "key0", 00:34:59.191 "allow_unrecognized_csi": false, 00:34:59.191 "method": "bdev_nvme_attach_controller", 00:34:59.191 "req_id": 1 00:34:59.191 } 00:34:59.191 Got JSON-RPC error response 00:34:59.191 response: 00:34:59.191 { 00:34:59.191 "code": -19, 00:34:59.191 "message": "No such device" 00:34:59.191 } 00:34:59.191 09:55:47 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:59.191 09:55:47 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:59.191 09:55:47 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:59.191 09:55:47 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:59.191 09:55:47 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:59.191 09:55:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:59.449 09:55:48 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.08FlkHxSgK 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:59.449 09:55:48 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:59.449 09:55:48 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:34:59.449 09:55:48 keyring_file -- 
nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:34:59.449 09:55:48 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:34:59.449 09:55:48 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:34:59.449 09:55:48 keyring_file -- nvmf/common.sh@731 -- # python - 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.08FlkHxSgK 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.08FlkHxSgK 00:34:59.449 09:55:48 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.08FlkHxSgK 00:34:59.449 09:55:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.08FlkHxSgK 00:34:59.449 09:55:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.08FlkHxSgK 00:34:59.707 09:55:48 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:59.707 09:55:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:59.964 nvme0n1 00:34:59.964 09:55:48 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:59.964 09:55:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:59.964 09:55:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:59.964 09:55:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:59.964 09:55:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:59.964 09:55:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:35:00.223 09:55:49 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:00.223 09:55:49 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:00.223 09:55:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:00.481 09:55:49 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:00.481 09:55:49 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:00.481 09:55:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.481 09:55:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.481 09:55:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:01.047 09:55:49 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:01.047 09:55:49 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:01.047 09:55:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:01.047 09:55:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:01.047 09:55:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:01.047 09:55:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.047 09:55:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:01.047 09:55:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:01.047 09:55:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:01.047 09:55:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:01.615 09:55:50 keyring_file -- keyring/file.sh@105 -- # bperf_cmd 
keyring_get_keys 00:35:01.616 09:55:50 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:01.616 09:55:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.616 09:55:50 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:01.616 09:55:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.08FlkHxSgK 00:35:01.616 09:55:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.08FlkHxSgK 00:35:01.874 09:55:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.N3YFj57ZnX 00:35:01.874 09:55:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.N3YFj57ZnX 00:35:02.134 09:55:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.134 09:55:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.701 nvme0n1 00:35:02.701 09:55:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:02.701 09:55:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:02.960 09:55:51 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:02.960 "subsystems": [ 00:35:02.960 { 00:35:02.960 "subsystem": "keyring", 00:35:02.960 "config": [ 00:35:02.960 { 00:35:02.960 "method": "keyring_file_add_key", 00:35:02.960 
"params": { 00:35:02.960 "name": "key0", 00:35:02.960 "path": "/tmp/tmp.08FlkHxSgK" 00:35:02.960 } 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "method": "keyring_file_add_key", 00:35:02.960 "params": { 00:35:02.960 "name": "key1", 00:35:02.960 "path": "/tmp/tmp.N3YFj57ZnX" 00:35:02.960 } 00:35:02.960 } 00:35:02.960 ] 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "subsystem": "iobuf", 00:35:02.960 "config": [ 00:35:02.960 { 00:35:02.960 "method": "iobuf_set_options", 00:35:02.960 "params": { 00:35:02.960 "small_pool_count": 8192, 00:35:02.960 "large_pool_count": 1024, 00:35:02.960 "small_bufsize": 8192, 00:35:02.960 "large_bufsize": 135168 00:35:02.960 } 00:35:02.960 } 00:35:02.960 ] 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "subsystem": "sock", 00:35:02.960 "config": [ 00:35:02.960 { 00:35:02.960 "method": "sock_set_default_impl", 00:35:02.960 "params": { 00:35:02.960 "impl_name": "posix" 00:35:02.960 } 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "method": "sock_impl_set_options", 00:35:02.960 "params": { 00:35:02.960 "impl_name": "ssl", 00:35:02.960 "recv_buf_size": 4096, 00:35:02.960 "send_buf_size": 4096, 00:35:02.960 "enable_recv_pipe": true, 00:35:02.960 "enable_quickack": false, 00:35:02.960 "enable_placement_id": 0, 00:35:02.960 "enable_zerocopy_send_server": true, 00:35:02.960 "enable_zerocopy_send_client": false, 00:35:02.960 "zerocopy_threshold": 0, 00:35:02.960 "tls_version": 0, 00:35:02.960 "enable_ktls": false 00:35:02.960 } 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "method": "sock_impl_set_options", 00:35:02.960 "params": { 00:35:02.960 "impl_name": "posix", 00:35:02.960 "recv_buf_size": 2097152, 00:35:02.960 "send_buf_size": 2097152, 00:35:02.960 "enable_recv_pipe": true, 00:35:02.960 "enable_quickack": false, 00:35:02.960 "enable_placement_id": 0, 00:35:02.960 "enable_zerocopy_send_server": true, 00:35:02.960 "enable_zerocopy_send_client": false, 00:35:02.960 "zerocopy_threshold": 0, 00:35:02.960 "tls_version": 0, 00:35:02.960 "enable_ktls": false 
00:35:02.960 } 00:35:02.960 } 00:35:02.960 ] 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "subsystem": "vmd", 00:35:02.960 "config": [] 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "subsystem": "accel", 00:35:02.960 "config": [ 00:35:02.960 { 00:35:02.960 "method": "accel_set_options", 00:35:02.960 "params": { 00:35:02.960 "small_cache_size": 128, 00:35:02.960 "large_cache_size": 16, 00:35:02.960 "task_count": 2048, 00:35:02.960 "sequence_count": 2048, 00:35:02.960 "buf_count": 2048 00:35:02.960 } 00:35:02.960 } 00:35:02.960 ] 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "subsystem": "bdev", 00:35:02.960 "config": [ 00:35:02.960 { 00:35:02.960 "method": "bdev_set_options", 00:35:02.960 "params": { 00:35:02.960 "bdev_io_pool_size": 65535, 00:35:02.960 "bdev_io_cache_size": 256, 00:35:02.960 "bdev_auto_examine": true, 00:35:02.960 "iobuf_small_cache_size": 128, 00:35:02.960 "iobuf_large_cache_size": 16 00:35:02.960 } 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "method": "bdev_raid_set_options", 00:35:02.960 "params": { 00:35:02.960 "process_window_size_kb": 1024, 00:35:02.960 "process_max_bandwidth_mb_sec": 0 00:35:02.960 } 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "method": "bdev_iscsi_set_options", 00:35:02.960 "params": { 00:35:02.960 "timeout_sec": 30 00:35:02.960 } 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "method": "bdev_nvme_set_options", 00:35:02.960 "params": { 00:35:02.960 "action_on_timeout": "none", 00:35:02.960 "timeout_us": 0, 00:35:02.960 "timeout_admin_us": 0, 00:35:02.960 "keep_alive_timeout_ms": 10000, 00:35:02.960 "arbitration_burst": 0, 00:35:02.960 "low_priority_weight": 0, 00:35:02.960 "medium_priority_weight": 0, 00:35:02.960 "high_priority_weight": 0, 00:35:02.960 "nvme_adminq_poll_period_us": 10000, 00:35:02.960 "nvme_ioq_poll_period_us": 0, 00:35:02.960 "io_queue_requests": 512, 00:35:02.960 "delay_cmd_submit": true, 00:35:02.960 "transport_retry_count": 4, 00:35:02.960 "bdev_retry_count": 3, 00:35:02.960 "transport_ack_timeout": 0, 
00:35:02.960 "ctrlr_loss_timeout_sec": 0, 00:35:02.960 "reconnect_delay_sec": 0, 00:35:02.960 "fast_io_fail_timeout_sec": 0, 00:35:02.960 "disable_auto_failback": false, 00:35:02.960 "generate_uuids": false, 00:35:02.960 "transport_tos": 0, 00:35:02.960 "nvme_error_stat": false, 00:35:02.960 "rdma_srq_size": 0, 00:35:02.960 "io_path_stat": false, 00:35:02.960 "allow_accel_sequence": false, 00:35:02.960 "rdma_max_cq_size": 0, 00:35:02.960 "rdma_cm_event_timeout_ms": 0, 00:35:02.960 "dhchap_digests": [ 00:35:02.960 "sha256", 00:35:02.960 "sha384", 00:35:02.960 "sha512" 00:35:02.960 ], 00:35:02.960 "dhchap_dhgroups": [ 00:35:02.960 "null", 00:35:02.960 "ffdhe2048", 00:35:02.960 "ffdhe3072", 00:35:02.960 "ffdhe4096", 00:35:02.960 "ffdhe6144", 00:35:02.960 "ffdhe8192" 00:35:02.960 ] 00:35:02.960 } 00:35:02.960 }, 00:35:02.960 { 00:35:02.960 "method": "bdev_nvme_attach_controller", 00:35:02.960 "params": { 00:35:02.960 "name": "nvme0", 00:35:02.960 "trtype": "TCP", 00:35:02.960 "adrfam": "IPv4", 00:35:02.960 "traddr": "127.0.0.1", 00:35:02.960 "trsvcid": "4420", 00:35:02.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.960 "prchk_reftag": false, 00:35:02.960 "prchk_guard": false, 00:35:02.961 "ctrlr_loss_timeout_sec": 0, 00:35:02.961 "reconnect_delay_sec": 0, 00:35:02.961 "fast_io_fail_timeout_sec": 0, 00:35:02.961 "psk": "key0", 00:35:02.961 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:02.961 "hdgst": false, 00:35:02.961 "ddgst": false 00:35:02.961 } 00:35:02.961 }, 00:35:02.961 { 00:35:02.961 "method": "bdev_nvme_set_hotplug", 00:35:02.961 "params": { 00:35:02.961 "period_us": 100000, 00:35:02.961 "enable": false 00:35:02.961 } 00:35:02.961 }, 00:35:02.961 { 00:35:02.961 "method": "bdev_wait_for_examine" 00:35:02.961 } 00:35:02.961 ] 00:35:02.961 }, 00:35:02.961 { 00:35:02.961 "subsystem": "nbd", 00:35:02.961 "config": [] 00:35:02.961 } 00:35:02.961 ] 00:35:02.961 }' 00:35:02.961 09:55:51 keyring_file -- keyring/file.sh@115 -- # killprocess 404725 00:35:02.961 
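The config dumped above is the standard SPDK startup-config shape: a top-level `subsystems` array, each entry carrying a `config` list of `{method, params}` RPC calls. A minimal sketch (in Python, since the test helpers already shell out to Python) rebuilding just the keyring and NVMe/TCP-attach portion — names, paths, and NQNs are copied from the dump; a real run generates fresh temp files for the key material:

```python
import json

# Minimal SPDK-style startup config: register two file-based keys, then
# attach an NVMe/TCP controller that authenticates with TLS PSK "key0".
# The /tmp paths are the throwaway files from the log above.
config = {
    "subsystems": [
        {
            "subsystem": "keyring",
            "config": [
                {"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": "/tmp/tmp.08FlkHxSgK"}},
                {"method": "keyring_file_add_key",
                 "params": {"name": "key1", "path": "/tmp/tmp.N3YFj57ZnX"}},
            ],
        },
        {
            "subsystem": "bdev",
            "config": [
                {"method": "bdev_nvme_attach_controller",
                 "params": {"name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                            "traddr": "127.0.0.1", "trsvcid": "4420",
                            "subnqn": "nqn.2016-06.io.spdk:cnode0",
                            "hostnqn": "nqn.2016-06.io.spdk:host0",
                            "psk": "key0"}},
                # Block until auto-examine of attached bdevs finishes.
                {"method": "bdev_wait_for_examine"},
            ],
        },
    ]
}

print(json.dumps(config, indent=1))
```

In the test this JSON is handed to `bdevperf` via `-c /dev/fd/63` (a bash process substitution), so the config never touches disk.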
09:55:51 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 404725 ']' 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@954 -- # kill -0 404725 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 404725 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 404725' 00:35:02.961 killing process with pid 404725 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@969 -- # kill 404725 00:35:02.961 Received shutdown signal, test time was about 1.000000 seconds 00:35:02.961 00:35:02.961 Latency(us) 00:35:02.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.961 =================================================================================================================== 00:35:02.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:02.961 09:55:51 keyring_file -- common/autotest_common.sh@974 -- # wait 404725 00:35:03.219 09:55:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=406245 00:35:03.219 09:55:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 406245 /var/tmp/bperf.sock 00:35:03.219 09:55:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 406245 ']' 00:35:03.219 09:55:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:03.219 09:55:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:03.219 09:55:52 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 
2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:03.219 09:55:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:03.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:03.219 09:55:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:03.219 09:55:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:03.219 09:55:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:03.219 "subsystems": [ 00:35:03.219 { 00:35:03.219 "subsystem": "keyring", 00:35:03.219 "config": [ 00:35:03.219 { 00:35:03.219 "method": "keyring_file_add_key", 00:35:03.219 "params": { 00:35:03.219 "name": "key0", 00:35:03.220 "path": "/tmp/tmp.08FlkHxSgK" 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "method": "keyring_file_add_key", 00:35:03.220 "params": { 00:35:03.220 "name": "key1", 00:35:03.220 "path": "/tmp/tmp.N3YFj57ZnX" 00:35:03.220 } 00:35:03.220 } 00:35:03.220 ] 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "subsystem": "iobuf", 00:35:03.220 "config": [ 00:35:03.220 { 00:35:03.220 "method": "iobuf_set_options", 00:35:03.220 "params": { 00:35:03.220 "small_pool_count": 8192, 00:35:03.220 "large_pool_count": 1024, 00:35:03.220 "small_bufsize": 8192, 00:35:03.220 "large_bufsize": 135168 00:35:03.220 } 00:35:03.220 } 00:35:03.220 ] 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "subsystem": "sock", 00:35:03.220 "config": [ 00:35:03.220 { 00:35:03.220 "method": "sock_set_default_impl", 00:35:03.220 "params": { 00:35:03.220 "impl_name": "posix" 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "method": "sock_impl_set_options", 00:35:03.220 "params": { 00:35:03.220 "impl_name": "ssl", 00:35:03.220 "recv_buf_size": 4096, 00:35:03.220 "send_buf_size": 4096, 00:35:03.220 "enable_recv_pipe": true, 00:35:03.220 "enable_quickack": false, 00:35:03.220 "enable_placement_id": 0, 00:35:03.220 
"enable_zerocopy_send_server": true, 00:35:03.220 "enable_zerocopy_send_client": false, 00:35:03.220 "zerocopy_threshold": 0, 00:35:03.220 "tls_version": 0, 00:35:03.220 "enable_ktls": false 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "method": "sock_impl_set_options", 00:35:03.220 "params": { 00:35:03.220 "impl_name": "posix", 00:35:03.220 "recv_buf_size": 2097152, 00:35:03.220 "send_buf_size": 2097152, 00:35:03.220 "enable_recv_pipe": true, 00:35:03.220 "enable_quickack": false, 00:35:03.220 "enable_placement_id": 0, 00:35:03.220 "enable_zerocopy_send_server": true, 00:35:03.220 "enable_zerocopy_send_client": false, 00:35:03.220 "zerocopy_threshold": 0, 00:35:03.220 "tls_version": 0, 00:35:03.220 "enable_ktls": false 00:35:03.220 } 00:35:03.220 } 00:35:03.220 ] 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "subsystem": "vmd", 00:35:03.220 "config": [] 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "subsystem": "accel", 00:35:03.220 "config": [ 00:35:03.220 { 00:35:03.220 "method": "accel_set_options", 00:35:03.220 "params": { 00:35:03.220 "small_cache_size": 128, 00:35:03.220 "large_cache_size": 16, 00:35:03.220 "task_count": 2048, 00:35:03.220 "sequence_count": 2048, 00:35:03.220 "buf_count": 2048 00:35:03.220 } 00:35:03.220 } 00:35:03.220 ] 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "subsystem": "bdev", 00:35:03.220 "config": [ 00:35:03.220 { 00:35:03.220 "method": "bdev_set_options", 00:35:03.220 "params": { 00:35:03.220 "bdev_io_pool_size": 65535, 00:35:03.220 "bdev_io_cache_size": 256, 00:35:03.220 "bdev_auto_examine": true, 00:35:03.220 "iobuf_small_cache_size": 128, 00:35:03.220 "iobuf_large_cache_size": 16 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "method": "bdev_raid_set_options", 00:35:03.220 "params": { 00:35:03.220 "process_window_size_kb": 1024, 00:35:03.220 "process_max_bandwidth_mb_sec": 0 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "method": "bdev_iscsi_set_options", 00:35:03.220 "params": { 00:35:03.220 
"timeout_sec": 30 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "method": "bdev_nvme_set_options", 00:35:03.220 "params": { 00:35:03.220 "action_on_timeout": "none", 00:35:03.220 "timeout_us": 0, 00:35:03.220 "timeout_admin_us": 0, 00:35:03.220 "keep_alive_timeout_ms": 10000, 00:35:03.220 "arbitration_burst": 0, 00:35:03.220 "low_priority_weight": 0, 00:35:03.220 "medium_priority_weight": 0, 00:35:03.220 "high_priority_weight": 0, 00:35:03.220 "nvme_adminq_poll_period_us": 10000, 00:35:03.220 "nvme_ioq_poll_period_us": 0, 00:35:03.220 "io_queue_requests": 512, 00:35:03.220 "delay_cmd_submit": true, 00:35:03.220 "transport_retry_count": 4, 00:35:03.220 "bdev_retry_count": 3, 00:35:03.220 "transport_ack_timeout": 0, 00:35:03.220 "ctrlr_loss_timeout_sec": 0, 00:35:03.220 "reconnect_delay_sec": 0, 00:35:03.220 "fast_io_fail_timeout_sec": 0, 00:35:03.220 "disable_auto_failback": false, 00:35:03.220 "generate_uuids": false, 00:35:03.220 "transport_tos": 0, 00:35:03.220 "nvme_error_stat": false, 00:35:03.220 "rdma_srq_size": 0, 00:35:03.220 "io_path_stat": false, 00:35:03.220 "allow_accel_sequence": false, 00:35:03.220 "rdma_max_cq_size": 0, 00:35:03.220 "rdma_cm_event_timeout_ms": 0, 00:35:03.220 "dhchap_digests": [ 00:35:03.220 "sha256", 00:35:03.220 "sha384", 00:35:03.220 "sha512" 00:35:03.220 ], 00:35:03.220 "dhchap_dhgroups": [ 00:35:03.220 "null", 00:35:03.220 "ffdhe2048", 00:35:03.220 "ffdhe3072", 00:35:03.220 "ffdhe4096", 00:35:03.220 "ffdhe6144", 00:35:03.220 "ffdhe8192" 00:35:03.220 ] 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "method": "bdev_nvme_attach_controller", 00:35:03.220 "params": { 00:35:03.220 "name": "nvme0", 00:35:03.220 "trtype": "TCP", 00:35:03.220 "adrfam": "IPv4", 00:35:03.220 "traddr": "127.0.0.1", 00:35:03.220 "trsvcid": "4420", 00:35:03.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.220 "prchk_reftag": false, 00:35:03.220 "prchk_guard": false, 00:35:03.220 "ctrlr_loss_timeout_sec": 0, 00:35:03.220 
"reconnect_delay_sec": 0, 00:35:03.220 "fast_io_fail_timeout_sec": 0, 00:35:03.220 "psk": "key0", 00:35:03.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.220 "hdgst": false, 00:35:03.220 "ddgst": false 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.220 "method": "bdev_nvme_set_hotplug", 00:35:03.220 "params": { 00:35:03.220 "period_us": 100000, 00:35:03.220 "enable": false 00:35:03.220 } 00:35:03.220 }, 00:35:03.220 { 00:35:03.221 "method": "bdev_wait_for_examine" 00:35:03.221 } 00:35:03.221 ] 00:35:03.221 }, 00:35:03.221 { 00:35:03.221 "subsystem": "nbd", 00:35:03.221 "config": [] 00:35:03.221 } 00:35:03.221 ] 00:35:03.221 }' 00:35:03.221 [2024-10-07 09:55:52.127762] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 00:35:03.221 [2024-10-07 09:55:52.127841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406245 ] 00:35:03.221 [2024-10-07 09:55:52.182894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.479 [2024-10-07 09:55:52.288940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.479 [2024-10-07 09:55:52.464985] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:04.412 09:55:53 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:04.412 09:55:53 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:04.412 09:55:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:04.412 09:55:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:04.412 09:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.670 09:55:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 
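The `get_refcnt` helper traced above pipes bperf's `keyring_get_keys` output through `jq '.[] | select(.name == "keyN")'` and then `jq -r .refcnt`. The same filter in Python, over a hypothetical response shaped like that RPC's output — only the `name` and `refcnt` fields are relied on, and the refcnt values mirror the `(( 2 == 2 ))` / `(( 1 == 1 ))` checks in the log:

```python
import json

# Hypothetical keyring_get_keys response (field values are illustrative).
raw = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.08FlkHxSgK", "refcnt": 2},
    {"name": "key1", "path": "/tmp/tmp.N3YFj57ZnX", "refcnt": 1},
])

def get_refcnt(response: str, name: str) -> int:
    """Equivalent of: jq '.[] | select(.name == "<name>")' | jq -r .refcnt"""
    for key in json.loads(response):
        if key["name"] == name:
            return key["refcnt"]
    raise KeyError(name)

print(get_refcnt(raw, "key0"), get_refcnt(raw, "key1"))
```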
00:35:04.670 09:55:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:04.670 09:55:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:04.670 09:55:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.670 09:55:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.670 09:55:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.670 09:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.928 09:55:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:04.928 09:55:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:04.928 09:55:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:04.928 09:55:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.928 09:55:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.928 09:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.928 09:55:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:05.186 09:55:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:05.186 09:55:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:05.186 09:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:05.186 09:55:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:05.444 09:55:54 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:05.444 09:55:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:05.444 09:55:54 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.08FlkHxSgK /tmp/tmp.N3YFj57ZnX 00:35:05.444 09:55:54 keyring_file -- 
keyring/file.sh@20 -- # killprocess 406245 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 406245 ']' 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@954 -- # kill -0 406245 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406245 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406245' 00:35:05.444 killing process with pid 406245 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@969 -- # kill 406245 00:35:05.444 Received shutdown signal, test time was about 1.000000 seconds 00:35:05.444 00:35:05.444 Latency(us) 00:35:05.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.444 =================================================================================================================== 00:35:05.444 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:05.444 09:55:54 keyring_file -- common/autotest_common.sh@974 -- # wait 406245 00:35:05.701 09:55:54 keyring_file -- keyring/file.sh@21 -- # killprocess 404715 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 404715 ']' 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@954 -- # kill -0 404715 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 404715 00:35:05.701 09:55:54 keyring_file 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 404715' 00:35:05.701 killing process with pid 404715 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@969 -- # kill 404715 00:35:05.701 09:55:54 keyring_file -- common/autotest_common.sh@974 -- # wait 404715 00:35:06.269 00:35:06.269 real 0m15.371s 00:35:06.269 user 0m38.763s 00:35:06.269 sys 0m3.275s 00:35:06.269 09:55:55 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:06.269 09:55:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.269 ************************************ 00:35:06.269 END TEST keyring_file 00:35:06.269 ************************************ 00:35:06.269 09:55:55 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:35:06.269 09:55:55 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:06.269 09:55:55 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:06.269 09:55:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:06.269 09:55:55 -- common/autotest_common.sh@10 -- # set +x 00:35:06.269 ************************************ 00:35:06.269 START TEST keyring_linux 00:35:06.269 ************************************ 00:35:06.269 09:55:55 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:06.269 Joined session keyring: 108892185 00:35:06.269 * Looking for test storage... 
00:35:06.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:06.269 09:55:55 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:06.269 09:55:55 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:35:06.269 09:55:55 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:06.269 09:55:55 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:06.269 09:55:55 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:06.270 09:55:55 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.270 09:55:55 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:06.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.270 --rc genhtml_branch_coverage=1 00:35:06.270 --rc genhtml_function_coverage=1 00:35:06.270 --rc genhtml_legend=1 00:35:06.270 --rc geninfo_all_blocks=1 00:35:06.270 --rc geninfo_unexecuted_blocks=1 00:35:06.270 00:35:06.270 ' 00:35:06.270 09:55:55 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:06.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.270 --rc genhtml_branch_coverage=1 00:35:06.270 --rc genhtml_function_coverage=1 00:35:06.270 --rc genhtml_legend=1 00:35:06.270 --rc geninfo_all_blocks=1 00:35:06.270 --rc geninfo_unexecuted_blocks=1 00:35:06.270 00:35:06.270 ' 
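The `cmp_versions` walk traced above splits each version string on `.`, `-`, and `:` (via `IFS=.-:`), then compares the fields numerically one position at a time, padding the shorter list with zeros — here deciding that lcov `1.15` is older than `2`. A condensed sketch of that logic (mapping non-numeric fields to 0 is a simplification of the shell's behavior):

```python
import re

def _fields(ver: str) -> list:
    # scripts/common.sh splits on IFS=.-: ; mirror that split here.
    return [int(f) if f.isdigit() else 0 for f in re.split(r"[.:-]", ver)]

def lt(v1: str, v2: str) -> bool:
    """True when v1 sorts strictly before v2, field by field."""
    a, b = _fields(v1), _fields(v2)
    width = max(len(a), len(b))
    a += [0] * (width - len(a))  # missing fields compare as 0
    b += [0] * (width - len(b))
    return a < b

print(lt("1.15", "2"))  # the exact comparison the trace performs
```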
00:35:06.270 09:55:55 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:06.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.270 --rc genhtml_branch_coverage=1 00:35:06.270 --rc genhtml_function_coverage=1 00:35:06.270 --rc genhtml_legend=1 00:35:06.270 --rc geninfo_all_blocks=1 00:35:06.270 --rc geninfo_unexecuted_blocks=1 00:35:06.270 00:35:06.270 ' 00:35:06.270 09:55:55 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:06.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.270 --rc genhtml_branch_coverage=1 00:35:06.270 --rc genhtml_function_coverage=1 00:35:06.270 --rc genhtml_legend=1 00:35:06.270 --rc geninfo_all_blocks=1 00:35:06.270 --rc geninfo_unexecuted_blocks=1 00:35:06.270 00:35:06.270 ' 00:35:06.270 09:55:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:06.270 09:55:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
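`nvme gen-hostnqn` above produced a UUID-based host NQN (`nqn.2014-08.org.nvmexpress:uuid:…`). That format — the fixed 2014-08 org prefix followed by a random UUID — can be reproduced with the standard `uuid` module; this is a sketch of the same shape, not the `nvme-cli` implementation itself:

```python
import uuid

def gen_hostnqn() -> str:
    # Same shape as `nvme gen-hostnqn`: fixed NVMe org prefix + random UUID.
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

nqn = gen_hostnqn()
print(nqn)
```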
00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.270 09:55:55 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.270 09:55:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.270 09:55:55 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.270 09:55:55 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.270 09:55:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:06.270 09:55:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:06.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.270 09:55:55 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.270 09:55:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:06.270 09:55:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:06.270 09:55:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:06.270 09:55:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:06.270 09:55:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:06.270 09:55:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:06.270 09:55:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:06.270 09:55:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:06.270 09:55:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:06.270 09:55:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@731 -- # python - 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:06.529 /tmp/:spdk-test:key0 00:35:06.529 09:55:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:35:06.529 09:55:55 keyring_linux -- nvmf/common.sh@731 -- # python - 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:06.529 09:55:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:06.529 /tmp/:spdk-test:key1 00:35:06.529 09:55:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=406722 00:35:06.529 09:55:55 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:06.529 09:55:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 406722 00:35:06.529 09:55:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 406722 ']' 00:35:06.529 09:55:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.529 09:55:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:06.529 09:55:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.529 09:55:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:06.529 09:55:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:06.529 [2024-10-07 09:55:55.406024] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:35:06.529 [2024-10-07 09:55:55.406129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406722 ] 00:35:06.529 [2024-10-07 09:55:55.461435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.788 [2024-10-07 09:55:55.572197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:07.046 09:55:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:07.046 [2024-10-07 09:55:55.808453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.046 null0 00:35:07.046 [2024-10-07 09:55:55.840512] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:07.046 [2024-10-07 09:55:55.840932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.046 09:55:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:07.046 681212062 00:35:07.046 09:55:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:07.046 400540239 00:35:07.046 09:55:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=406736 00:35:07.046 09:55:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:07.046 09:55:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 406736 /var/tmp/bperf.sock 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 406736 ']' 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:07.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:07.046 09:55:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:07.046 [2024-10-07 09:55:55.906313] Starting SPDK v25.01-pre git sha1 3365e5306 / DPDK 24.03.0 initialization... 
00:35:07.046 [2024-10-07 09:55:55.906393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406736 ] 00:35:07.046 [2024-10-07 09:55:55.960906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.305 [2024-10-07 09:55:56.067752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.305 09:55:56 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:07.305 09:55:56 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:07.305 09:55:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:07.305 09:55:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:07.563 09:55:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:07.563 09:55:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:07.821 09:55:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:07.821 09:55:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:08.078 [2024-10-07 09:55:57.003376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:08.337 nvme0n1 00:35:08.337 09:55:57 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:08.337 09:55:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:08.337 09:55:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:08.337 09:55:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:08.337 09:55:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:08.337 09:55:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.595 09:55:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:08.595 09:55:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:08.595 09:55:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:08.595 09:55:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:08.595 09:55:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.595 09:55:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.595 09:55:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:08.853 09:55:57 keyring_linux -- keyring/linux.sh@25 -- # sn=681212062 00:35:08.853 09:55:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:08.853 09:55:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:08.853 09:55:57 keyring_linux -- keyring/linux.sh@26 -- # [[ 681212062 == \6\8\1\2\1\2\0\6\2 ]] 00:35:08.853 09:55:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 681212062 00:35:08.853 09:55:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:08.853 09:55:57 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:08.853 Running I/O for 1 seconds... 00:35:09.794 11300.00 IOPS, 44.14 MiB/s 00:35:09.794 Latency(us) 00:35:09.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.794 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:09.794 nvme0n1 : 1.01 11310.97 44.18 0.00 0.00 11251.27 4587.52 16602.45 00:35:09.794 =================================================================================================================== 00:35:09.794 Total : 11310.97 44.18 0.00 0.00 11251.27 4587.52 16602.45 00:35:09.794 { 00:35:09.794 "results": [ 00:35:09.794 { 00:35:09.794 "job": "nvme0n1", 00:35:09.794 "core_mask": "0x2", 00:35:09.794 "workload": "randread", 00:35:09.794 "status": "finished", 00:35:09.794 "queue_depth": 128, 00:35:09.794 "io_size": 4096, 00:35:09.794 "runtime": 1.010435, 00:35:09.794 "iops": 11310.97002776032, 00:35:09.794 "mibps": 44.18347667093875, 00:35:09.794 "io_failed": 0, 00:35:09.794 "io_timeout": 0, 00:35:09.794 "avg_latency_us": 11251.273693236502, 00:35:09.794 "min_latency_us": 4587.52, 00:35:09.794 "max_latency_us": 16602.453333333335 00:35:09.794 } 00:35:09.794 ], 00:35:09.794 "core_count": 1 00:35:09.794 } 00:35:09.794 09:55:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:09.794 09:55:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:10.096 09:55:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:10.096 09:55:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:10.096 09:55:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:10.096 09:55:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:10.096 09:55:59 
keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.096 09:55:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:10.409 09:55:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:10.409 09:55:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:10.409 09:55:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:10.409 09:55:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:10.409 09:55:59 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:35:10.409 09:55:59 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:10.409 09:55:59 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:10.409 09:55:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:10.409 09:55:59 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:10.409 09:55:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:10.409 09:55:59 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:10.409 09:55:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:10.692 [2024-10-07 09:55:59.564833] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:10.692 [2024-10-07 09:55:59.564923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1d480 (107): Transport endpoint is not connected 00:35:10.692 [2024-10-07 09:55:59.565915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1d480 (9): Bad file descriptor 00:35:10.692 [2024-10-07 09:55:59.566914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:10.692 [2024-10-07 09:55:59.566934] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:10.692 [2024-10-07 09:55:59.566947] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:10.692 [2024-10-07 09:55:59.566960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:35:10.692 request: 00:35:10.692 { 00:35:10.692 "name": "nvme0", 00:35:10.692 "trtype": "tcp", 00:35:10.692 "traddr": "127.0.0.1", 00:35:10.692 "adrfam": "ipv4", 00:35:10.692 "trsvcid": "4420", 00:35:10.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.692 "prchk_reftag": false, 00:35:10.692 "prchk_guard": false, 00:35:10.692 "hdgst": false, 00:35:10.692 "ddgst": false, 00:35:10.692 "psk": ":spdk-test:key1", 00:35:10.692 "allow_unrecognized_csi": false, 00:35:10.692 "method": "bdev_nvme_attach_controller", 00:35:10.692 "req_id": 1 00:35:10.692 } 00:35:10.692 Got JSON-RPC error response 00:35:10.692 response: 00:35:10.692 { 00:35:10.692 "code": -5, 00:35:10.692 "message": "Input/output error" 00:35:10.692 } 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@33 -- # sn=681212062 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 681212062 00:35:10.692 1 links removed 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:10.692 
09:55:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@33 -- # sn=400540239 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 400540239 00:35:10.692 1 links removed 00:35:10.692 09:55:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 406736 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 406736 ']' 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 406736 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406736 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406736' 00:35:10.692 killing process with pid 406736 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 406736 00:35:10.692 Received shutdown signal, test time was about 1.000000 seconds 00:35:10.692 00:35:10.692 Latency(us) 00:35:10.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.692 =================================================================================================================== 00:35:10.692 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.692 09:55:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 406736 00:35:10.950 09:55:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 406722 
00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 406722 ']' 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 406722 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406722 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406722' 00:35:10.950 killing process with pid 406722 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 406722 00:35:10.950 09:55:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 406722 00:35:11.518 00:35:11.518 real 0m5.260s 00:35:11.518 user 0m10.415s 00:35:11.518 sys 0m1.571s 00:35:11.518 09:56:00 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:11.518 09:56:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:11.518 ************************************ 00:35:11.518 END TEST keyring_linux 00:35:11.518 ************************************ 00:35:11.518 09:56:00 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 
00:35:11.518 09:56:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:11.518 09:56:00 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:35:11.518 09:56:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:11.518 09:56:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:11.518 09:56:00 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:35:11.518 09:56:00 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:35:11.518 09:56:00 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:35:11.518 09:56:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:11.518 09:56:00 -- common/autotest_common.sh@10 -- # set +x 00:35:11.518 09:56:00 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:35:11.518 09:56:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:11.518 09:56:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:11.518 09:56:00 -- common/autotest_common.sh@10 -- # set +x 00:35:14.050 INFO: APP EXITING 00:35:14.050 INFO: killing all VMs 00:35:14.050 INFO: killing vhost app 00:35:14.050 INFO: EXIT DONE 00:35:14.618 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:35:14.618 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:35:14.618 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:35:14.618 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:35:14.618 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:35:14.618 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:35:14.877 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:35:14.877 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:35:14.877 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:35:14.877 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:35:14.877 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:35:14.877 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:35:14.877 
0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:35:14.877 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:35:14.877 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:35:14.877 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:35:14.877 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:35:16.253 Cleaning 00:35:16.253 Removing: /var/run/dpdk/spdk0/config 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:16.253 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:16.253 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:16.253 Removing: /var/run/dpdk/spdk1/config 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:16.253 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:16.253 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:16.253 Removing: /var/run/dpdk/spdk2/config 00:35:16.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:16.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:16.253 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:16.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:16.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:16.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:16.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:16.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:16.253 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:16.253 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:16.253 Removing: /var/run/dpdk/spdk3/config 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:16.253 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:16.253 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:16.253 Removing: /var/run/dpdk/spdk4/config 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:16.253 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:16.253 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:16.253 Removing: /dev/shm/bdev_svc_trace.1 00:35:16.253 Removing: 
/dev/shm/nvmf_trace.0 00:35:16.253 Removing: /dev/shm/spdk_tgt_trace.pid98840 00:35:16.253 Removing: /var/run/dpdk/spdk0 00:35:16.253 Removing: /var/run/dpdk/spdk1 00:35:16.253 Removing: /var/run/dpdk/spdk2 00:35:16.253 Removing: /var/run/dpdk/spdk3 00:35:16.253 Removing: /var/run/dpdk/spdk4 00:35:16.253 Removing: /var/run/dpdk/spdk_pid100047 00:35:16.253 Removing: /var/run/dpdk/spdk_pid100181 00:35:16.253 Removing: /var/run/dpdk/spdk_pid101380 00:35:16.253 Removing: /var/run/dpdk/spdk_pid101386 00:35:16.253 Removing: /var/run/dpdk/spdk_pid101642 00:35:16.253 Removing: /var/run/dpdk/spdk_pid102909 00:35:16.253 Removing: /var/run/dpdk/spdk_pid103794 00:35:16.253 Removing: /var/run/dpdk/spdk_pid104093 00:35:16.253 Removing: /var/run/dpdk/spdk_pid104291 00:35:16.253 Removing: /var/run/dpdk/spdk_pid104501 00:35:16.253 Removing: /var/run/dpdk/spdk_pid104708 00:35:16.253 Removing: /var/run/dpdk/spdk_pid104955 00:35:16.253 Removing: /var/run/dpdk/spdk_pid105105 00:35:16.253 Removing: /var/run/dpdk/spdk_pid105299 00:35:16.253 Removing: /var/run/dpdk/spdk_pid105718 00:35:16.253 Removing: /var/run/dpdk/spdk_pid108109 00:35:16.253 Removing: /var/run/dpdk/spdk_pid108271 00:35:16.253 Removing: /var/run/dpdk/spdk_pid108427 00:35:16.253 Removing: /var/run/dpdk/spdk_pid108544 00:35:16.253 Removing: /var/run/dpdk/spdk_pid108843 00:35:16.253 Removing: /var/run/dpdk/spdk_pid108966 00:35:16.253 Removing: /var/run/dpdk/spdk_pid109268 00:35:16.253 Removing: /var/run/dpdk/spdk_pid109388 00:35:16.253 Removing: /var/run/dpdk/spdk_pid109552 00:35:16.253 Removing: /var/run/dpdk/spdk_pid109675 00:35:16.253 Removing: /var/run/dpdk/spdk_pid109839 00:35:16.253 Removing: /var/run/dpdk/spdk_pid109844 00:35:16.253 Removing: /var/run/dpdk/spdk_pid110328 00:35:16.253 Removing: /var/run/dpdk/spdk_pid110476 00:35:16.253 Removing: /var/run/dpdk/spdk_pid110764 00:35:16.253 Removing: /var/run/dpdk/spdk_pid112813 00:35:16.253 Removing: /var/run/dpdk/spdk_pid115324 00:35:16.253 Removing: 
/var/run/dpdk/spdk_pid122020 00:35:16.253 Removing: /var/run/dpdk/spdk_pid122517 00:35:16.253 Removing: /var/run/dpdk/spdk_pid124811 00:35:16.253 Removing: /var/run/dpdk/spdk_pid125080 00:35:16.253 Removing: /var/run/dpdk/spdk_pid127586 00:35:16.253 Removing: /var/run/dpdk/spdk_pid131302 00:35:16.253 Removing: /var/run/dpdk/spdk_pid133893 00:35:16.253 Removing: /var/run/dpdk/spdk_pid140124 00:35:16.253 Removing: /var/run/dpdk/spdk_pid145113 00:35:16.253 Removing: /var/run/dpdk/spdk_pid146373 00:35:16.253 Removing: /var/run/dpdk/spdk_pid147006 00:35:16.253 Removing: /var/run/dpdk/spdk_pid157046 00:35:16.253 Removing: /var/run/dpdk/spdk_pid159220 00:35:16.253 Removing: /var/run/dpdk/spdk_pid185737 00:35:16.253 Removing: /var/run/dpdk/spdk_pid188888 00:35:16.253 Removing: /var/run/dpdk/spdk_pid192547 00:35:16.253 Removing: /var/run/dpdk/spdk_pid196232 00:35:16.253 Removing: /var/run/dpdk/spdk_pid196235 00:35:16.253 Removing: /var/run/dpdk/spdk_pid196862 00:35:16.253 Removing: /var/run/dpdk/spdk_pid197482 00:35:16.253 Removing: /var/run/dpdk/spdk_pid198003 00:35:16.253 Removing: /var/run/dpdk/spdk_pid198404 00:35:16.253 Removing: /var/run/dpdk/spdk_pid198496 00:35:16.253 Removing: /var/run/dpdk/spdk_pid198641 00:35:16.253 Removing: /var/run/dpdk/spdk_pid198769 00:35:16.512 Removing: /var/run/dpdk/spdk_pid198772 00:35:16.512 Removing: /var/run/dpdk/spdk_pid199394 00:35:16.512 Removing: /var/run/dpdk/spdk_pid200021 00:35:16.512 Removing: /var/run/dpdk/spdk_pid200768 00:35:16.512 Removing: /var/run/dpdk/spdk_pid201143 00:35:16.512 Removing: /var/run/dpdk/spdk_pid201148 00:35:16.512 Removing: /var/run/dpdk/spdk_pid201609 00:35:16.512 Removing: /var/run/dpdk/spdk_pid202768 00:35:16.512 Removing: /var/run/dpdk/spdk_pid203468 00:35:16.512 Removing: /var/run/dpdk/spdk_pid208662 00:35:16.512 Removing: /var/run/dpdk/spdk_pid236101 00:35:16.512 Removing: /var/run/dpdk/spdk_pid238913 00:35:16.512 Removing: /var/run/dpdk/spdk_pid240043 00:35:16.512 Removing: 
/var/run/dpdk/spdk_pid241321 00:35:16.512 Removing: /var/run/dpdk/spdk_pid241462 00:35:16.512 Removing: /var/run/dpdk/spdk_pid241599 00:35:16.512 Removing: /var/run/dpdk/spdk_pid241731 00:35:16.512 Removing: /var/run/dpdk/spdk_pid242161 00:35:16.512 Removing: /var/run/dpdk/spdk_pid243422 00:35:16.512 Removing: /var/run/dpdk/spdk_pid244243 00:35:16.512 Removing: /var/run/dpdk/spdk_pid244663 00:35:16.512 Removing: /var/run/dpdk/spdk_pid246319 00:35:16.512 Removing: /var/run/dpdk/spdk_pid246625 00:35:16.512 Removing: /var/run/dpdk/spdk_pid247156 00:35:16.512 Removing: /var/run/dpdk/spdk_pid249665 00:35:16.512 Removing: /var/run/dpdk/spdk_pid253417 00:35:16.512 Removing: /var/run/dpdk/spdk_pid253418 00:35:16.512 Removing: /var/run/dpdk/spdk_pid253419 00:35:16.512 Removing: /var/run/dpdk/spdk_pid255529 00:35:16.512 Removing: /var/run/dpdk/spdk_pid260211 00:35:16.512 Removing: /var/run/dpdk/spdk_pid262949 00:35:16.512 Removing: /var/run/dpdk/spdk_pid266673 00:35:16.512 Removing: /var/run/dpdk/spdk_pid267588 00:35:16.512 Removing: /var/run/dpdk/spdk_pid268640 00:35:16.512 Removing: /var/run/dpdk/spdk_pid269687 00:35:16.512 Removing: /var/run/dpdk/spdk_pid272380 00:35:16.512 Removing: /var/run/dpdk/spdk_pid274632 00:35:16.512 Removing: /var/run/dpdk/spdk_pid278680 00:35:16.512 Removing: /var/run/dpdk/spdk_pid278692 00:35:16.512 Removing: /var/run/dpdk/spdk_pid281450 00:35:16.512 Removing: /var/run/dpdk/spdk_pid281584 00:35:16.512 Removing: /var/run/dpdk/spdk_pid281715 00:35:16.512 Removing: /var/run/dpdk/spdk_pid281967 00:35:16.512 Removing: /var/run/dpdk/spdk_pid282089 00:35:16.512 Removing: /var/run/dpdk/spdk_pid284729 00:35:16.512 Removing: /var/run/dpdk/spdk_pid285046 00:35:16.512 Removing: /var/run/dpdk/spdk_pid287592 00:35:16.512 Removing: /var/run/dpdk/spdk_pid290101 00:35:16.512 Removing: /var/run/dpdk/spdk_pid293380 00:35:16.512 Removing: /var/run/dpdk/spdk_pid296676 00:35:16.512 Removing: /var/run/dpdk/spdk_pid302861 00:35:16.512 Removing: 
/var/run/dpdk/spdk_pid307129 00:35:16.512 Removing: /var/run/dpdk/spdk_pid307132 00:35:16.512 Removing: /var/run/dpdk/spdk_pid319369 00:35:16.512 Removing: /var/run/dpdk/spdk_pid319872 00:35:16.512 Removing: /var/run/dpdk/spdk_pid320287 00:35:16.512 Removing: /var/run/dpdk/spdk_pid320883 00:35:16.512 Removing: /var/run/dpdk/spdk_pid321900 00:35:16.512 Removing: /var/run/dpdk/spdk_pid322338 00:35:16.512 Removing: /var/run/dpdk/spdk_pid322729 00:35:16.512 Removing: /var/run/dpdk/spdk_pid323226 00:35:16.512 Removing: /var/run/dpdk/spdk_pid325618 00:35:16.512 Removing: /var/run/dpdk/spdk_pid325757 00:35:16.512 Removing: /var/run/dpdk/spdk_pid329497 00:35:16.512 Removing: /var/run/dpdk/spdk_pid329549 00:35:16.512 Removing: /var/run/dpdk/spdk_pid332775 00:35:16.512 Removing: /var/run/dpdk/spdk_pid335278 00:35:16.512 Removing: /var/run/dpdk/spdk_pid341989 00:35:16.512 Removing: /var/run/dpdk/spdk_pid342376 00:35:16.512 Removing: /var/run/dpdk/spdk_pid344765 00:35:16.512 Removing: /var/run/dpdk/spdk_pid344921 00:35:16.512 Removing: /var/run/dpdk/spdk_pid347417 00:35:16.512 Removing: /var/run/dpdk/spdk_pid350937 00:35:16.512 Removing: /var/run/dpdk/spdk_pid353121 00:35:16.512 Removing: /var/run/dpdk/spdk_pid359820 00:35:16.512 Removing: /var/run/dpdk/spdk_pid364771 00:35:16.512 Removing: /var/run/dpdk/spdk_pid365942 00:35:16.512 Removing: /var/run/dpdk/spdk_pid366650 00:35:16.512 Removing: /var/run/dpdk/spdk_pid376366 00:35:16.512 Removing: /var/run/dpdk/spdk_pid378514 00:35:16.512 Removing: /var/run/dpdk/spdk_pid380432 00:35:16.512 Removing: /var/run/dpdk/spdk_pid385245 00:35:16.512 Removing: /var/run/dpdk/spdk_pid385252 00:35:16.513 Removing: /var/run/dpdk/spdk_pid388015 00:35:16.513 Removing: /var/run/dpdk/spdk_pid390046 00:35:16.513 Removing: /var/run/dpdk/spdk_pid391444 00:35:16.513 Removing: /var/run/dpdk/spdk_pid392268 00:35:16.513 Removing: /var/run/dpdk/spdk_pid393608 00:35:16.513 Removing: /var/run/dpdk/spdk_pid394338 00:35:16.513 Removing: 
/var/run/dpdk/spdk_pid399598 00:35:16.513 Removing: /var/run/dpdk/spdk_pid399862 00:35:16.513 Removing: /var/run/dpdk/spdk_pid400242 00:35:16.513 Removing: /var/run/dpdk/spdk_pid401723 00:35:16.513 Removing: /var/run/dpdk/spdk_pid402105 00:35:16.513 Removing: /var/run/dpdk/spdk_pid402461 00:35:16.513 Removing: /var/run/dpdk/spdk_pid404715 00:35:16.513 Removing: /var/run/dpdk/spdk_pid404725 00:35:16.513 Removing: /var/run/dpdk/spdk_pid406245 00:35:16.513 Removing: /var/run/dpdk/spdk_pid406722 00:35:16.513 Removing: /var/run/dpdk/spdk_pid406736 00:35:16.513 Removing: /var/run/dpdk/spdk_pid97227 00:35:16.513 Removing: /var/run/dpdk/spdk_pid97938 00:35:16.771 Removing: /var/run/dpdk/spdk_pid98840 00:35:16.771 Removing: /var/run/dpdk/spdk_pid99271 00:35:16.771 Clean 00:35:16.771 09:56:05 -- common/autotest_common.sh@1451 -- # return 0 00:35:16.771 09:56:05 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:35:16.771 09:56:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:16.771 09:56:05 -- common/autotest_common.sh@10 -- # set +x 00:35:16.771 09:56:05 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:35:16.771 09:56:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:16.771 09:56:05 -- common/autotest_common.sh@10 -- # set +x 00:35:16.771 09:56:05 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:16.771 09:56:05 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:16.772 09:56:05 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:16.772 09:56:05 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:35:16.772 09:56:05 -- spdk/autotest.sh@394 -- # hostname 00:35:16.772 09:56:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-19 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:17.030 geninfo: WARNING: invalid characters removed from testname! 00:35:49.106 09:56:36 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:51.635 09:56:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:54.917 09:56:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:58.201 09:56:46 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:00.731 09:56:49 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:04.016 09:56:52 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:06.546 09:56:55 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:06.805 09:56:55 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:36:06.805 09:56:55 -- common/autotest_common.sh@1681 -- $ lcov --version 00:36:06.805 09:56:55 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:36:06.805 09:56:55 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:36:06.805 09:56:55 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:36:06.805 09:56:55 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:36:06.805 09:56:55 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:36:06.805 09:56:55 -- scripts/common.sh@336 -- $ IFS=.-: 00:36:06.805 09:56:55 -- scripts/common.sh@336 -- $ read -ra ver1 00:36:06.805 09:56:55 -- scripts/common.sh@337 -- $ IFS=.-: 00:36:06.805 09:56:55 -- scripts/common.sh@337 -- $ read -ra ver2 00:36:06.805 09:56:55 -- scripts/common.sh@338 -- $ local 'op=<' 
00:36:06.805 09:56:55 -- scripts/common.sh@340 -- $ ver1_l=2 00:36:06.805 09:56:55 -- scripts/common.sh@341 -- $ ver2_l=1 00:36:06.805 09:56:55 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:36:06.805 09:56:55 -- scripts/common.sh@344 -- $ case "$op" in 00:36:06.805 09:56:55 -- scripts/common.sh@345 -- $ : 1 00:36:06.805 09:56:55 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:36:06.805 09:56:55 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:06.805 09:56:55 -- scripts/common.sh@365 -- $ decimal 1 00:36:06.805 09:56:55 -- scripts/common.sh@353 -- $ local d=1 00:36:06.805 09:56:55 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:36:06.805 09:56:55 -- scripts/common.sh@355 -- $ echo 1 00:36:06.805 09:56:55 -- scripts/common.sh@365 -- $ ver1[v]=1 00:36:06.805 09:56:55 -- scripts/common.sh@366 -- $ decimal 2 00:36:06.805 09:56:55 -- scripts/common.sh@353 -- $ local d=2 00:36:06.805 09:56:55 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:36:06.805 09:56:55 -- scripts/common.sh@355 -- $ echo 2 00:36:06.805 09:56:55 -- scripts/common.sh@366 -- $ ver2[v]=2 00:36:06.805 09:56:55 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:36:06.805 09:56:55 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:36:06.805 09:56:55 -- scripts/common.sh@368 -- $ return 0 00:36:06.805 09:56:55 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.806 09:56:55 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:36:06.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.806 --rc genhtml_branch_coverage=1 00:36:06.806 --rc genhtml_function_coverage=1 00:36:06.806 --rc genhtml_legend=1 00:36:06.806 --rc geninfo_all_blocks=1 00:36:06.806 --rc geninfo_unexecuted_blocks=1 00:36:06.806 00:36:06.806 ' 00:36:06.806 09:56:55 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:36:06.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:36:06.806 --rc genhtml_branch_coverage=1 00:36:06.806 --rc genhtml_function_coverage=1 00:36:06.806 --rc genhtml_legend=1 00:36:06.806 --rc geninfo_all_blocks=1 00:36:06.806 --rc geninfo_unexecuted_blocks=1 00:36:06.806 00:36:06.806 ' 00:36:06.806 09:56:55 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:36:06.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.806 --rc genhtml_branch_coverage=1 00:36:06.806 --rc genhtml_function_coverage=1 00:36:06.806 --rc genhtml_legend=1 00:36:06.806 --rc geninfo_all_blocks=1 00:36:06.806 --rc geninfo_unexecuted_blocks=1 00:36:06.806 00:36:06.806 ' 00:36:06.806 09:56:55 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:36:06.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.806 --rc genhtml_branch_coverage=1 00:36:06.806 --rc genhtml_function_coverage=1 00:36:06.806 --rc genhtml_legend=1 00:36:06.806 --rc geninfo_all_blocks=1 00:36:06.806 --rc geninfo_unexecuted_blocks=1 00:36:06.806 00:36:06.806 ' 00:36:06.806 09:56:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.806 09:56:55 -- scripts/common.sh@15 -- $ shopt -s extglob 00:36:06.806 09:56:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:06.806 09:56:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.806 09:56:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.806 09:56:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.806 09:56:55 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.806 09:56:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.806 09:56:55 -- paths/export.sh@5 -- $ export PATH 00:36:06.806 09:56:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.806 09:56:55 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:06.806 09:56:55 -- common/autobuild_common.sh@486 -- $ date +%s 00:36:06.806 09:56:55 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728287815.XXXXXX 00:36:06.806 09:56:55 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728287815.zFeB9E 00:36:06.806 09:56:55 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:36:06.806 09:56:55 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:36:06.806 09:56:55 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 
00:36:06.806 09:56:55 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:06.806 09:56:55 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:06.806 09:56:55 -- common/autobuild_common.sh@502 -- $ get_config_params 00:36:06.806 09:56:55 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:36:06.806 09:56:55 -- common/autotest_common.sh@10 -- $ set +x 00:36:06.806 09:56:55 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:36:06.806 09:56:55 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:36:06.806 09:56:55 -- pm/common@17 -- $ local monitor 00:36:06.806 09:56:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:06.806 09:56:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:06.806 09:56:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:06.806 09:56:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:06.806 09:56:55 -- pm/common@21 -- $ date +%s 00:36:06.806 09:56:55 -- pm/common@25 -- $ sleep 1 00:36:06.806 09:56:55 -- pm/common@21 -- $ date +%s 00:36:06.806 09:56:55 -- pm/common@21 -- $ date +%s 00:36:06.806 09:56:55 -- pm/common@21 -- $ date +%s 00:36:06.806 09:56:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728287815 00:36:06.806 09:56:55 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728287815 00:36:06.806 09:56:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728287815 00:36:06.806 09:56:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728287815 00:36:06.806 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728287815_collect-vmstat.pm.log 00:36:06.806 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728287815_collect-cpu-load.pm.log 00:36:06.806 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728287815_collect-cpu-temp.pm.log 00:36:06.806 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728287815_collect-bmc-pm.bmc.pm.log 00:36:07.744 09:56:56 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:36:07.744 09:56:56 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:36:07.744 09:56:56 -- spdk/autopackage.sh@14 -- $ timing_finish 00:36:07.744 09:56:56 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:07.744 09:56:56 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:07.744 09:56:56 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:07.744 09:56:56 -- spdk/autopackage.sh@1 -- $ 
stop_monitor_resources 00:36:07.744 09:56:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:07.744 09:56:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:07.744 09:56:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:07.744 09:56:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:07.744 09:56:56 -- pm/common@44 -- $ pid=417444 00:36:07.744 09:56:56 -- pm/common@50 -- $ kill -TERM 417444 00:36:07.744 09:56:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:07.744 09:56:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:07.744 09:56:56 -- pm/common@44 -- $ pid=417446 00:36:07.744 09:56:56 -- pm/common@50 -- $ kill -TERM 417446 00:36:07.744 09:56:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:07.744 09:56:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:07.744 09:56:56 -- pm/common@44 -- $ pid=417448 00:36:07.744 09:56:56 -- pm/common@50 -- $ kill -TERM 417448 00:36:07.744 09:56:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:07.744 09:56:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:07.744 09:56:56 -- pm/common@44 -- $ pid=417476 00:36:07.744 09:56:56 -- pm/common@50 -- $ sudo -E kill -TERM 417476 00:36:08.004 + [[ -n 27609 ]] 00:36:08.004 + sudo kill 27609 00:36:08.013 Pausing (Preparing for shutdown) 01:03:13.272 Resuming build at Mon Oct 07 08:24:02 UTC 2024 after Jenkins restart 01:03:13.304 Waiting for reconnection of GP19 before proceeding with build 01:03:13.305 Timeout set to expire in 14 sec 01:03:13.308 Ready to run at Mon Oct 07 08:24:02 UTC 2024 01:03:13.315 [Pipeline] } 01:03:13.329 [Pipeline] // stage 01:03:13.334 [Pipeline] } 01:03:13.349 
[Pipeline] // timeout 01:03:13.354 [Pipeline] } 01:03:13.370 [Pipeline] // catchError 01:03:13.376 [Pipeline] } 01:03:13.394 [Pipeline] // wrap 01:03:13.399 [Pipeline] } 01:03:13.413 [Pipeline] // catchError 01:03:13.436 [Pipeline] stage 01:03:13.438 [Pipeline] { (Epilogue) 01:03:13.452 [Pipeline] catchError 01:03:13.454 [Pipeline] { 01:03:13.467 [Pipeline] echo 01:03:13.469 Cleanup processes 01:03:13.477 [Pipeline] sh 01:03:14.425 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:03:14.425 423807 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:03:14.447 [Pipeline] sh 01:03:14.744 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:03:14.744 ++ grep -v 'sudo pgrep' 01:03:14.744 ++ awk '{print $1}' 01:03:14.744 + sudo kill -9 01:03:14.744 + true 01:03:14.758 [Pipeline] sh 01:03:15.049 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:03:27.292 [Pipeline] sh 01:03:27.592 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:03:27.592 Artifacts sizes are good 01:03:27.611 [Pipeline] archiveArtifacts 01:03:27.620 Archiving artifacts 01:03:28.254 [Pipeline] sh 01:03:28.542 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 01:03:28.562 [Pipeline] cleanWs 01:03:28.576 [WS-CLEANUP] Deleting project workspace... 01:03:28.576 [WS-CLEANUP] Deferred wipeout is used... 01:03:28.591 [WS-CLEANUP] done 01:03:28.593 [Pipeline] } 01:03:28.611 [Pipeline] // catchError 01:03:28.624 [Pipeline] sh 01:03:28.911 + logger -p user.info -t JENKINS-CI 01:03:28.919 [Pipeline] } 01:03:28.933 [Pipeline] // stage 01:03:28.938 [Pipeline] } 01:03:28.963 [Pipeline] // node 01:03:28.967 [Pipeline] End of Pipeline 01:03:29.007 Finished: SUCCESS